- NAME
- tcltest - Test harness support code and utilities
- SYNOPSIS
- package require tcltest ?2.0?
- tcltest::test name desc ?option value? ?option value? ...
- tcltest::test name desc {?option value? ?option value? ...}
- tcltest::cleanupTests ?runningMultipleTests?
- tcltest::runAllTests
- tcltest::interpreter ?interp?
- tcltest::singleProcess ?boolean?
- tcltest::debug ?level?
- tcltest::verbose ?levelList?
- tcltest::preserveCore ?level?
- tcltest::testConstraint constraint ?value?
- tcltest::limitConstraints ?constraintList?
- tcltest::workingDirectory ?dir?
- tcltest::temporaryDirectory ?dir?
- tcltest::testsDirectory ?dir?
- tcltest::match ?patternList?
- tcltest::matchFiles ?patternList?
- tcltest::matchDirectories ?patternList?
- tcltest::skip ?patternList?
- tcltest::skipFiles ?patternList?
- tcltest::skipDirectories ?patternList?
- tcltest::loadTestedCommands
- tcltest::loadScript ?script?
- tcltest::loadFile ?filename?
- tcltest::outputChannel ?channelID?
- tcltest::outputFile ?filename?
- tcltest::errorChannel ?channelID?
- tcltest::errorFile ?filename?
- tcltest::makeFile contents name ?directory?
- tcltest::removeFile name ?directory?
- tcltest::makeDirectory name ?directory?
- tcltest::removeDirectory name ?directory?
- tcltest::viewFile file ?directory?
- tcltest::normalizeMsg msg
- tcltest::normalizePath pathVar
- tcltest::bytestring string
- tcltest::saveState
- tcltest::restoreState
- tcltest::threadReap
- DESCRIPTION
- COMMANDS
- tcltest::test name desc ?option value? ?option value? ...
- tcltest::test name desc {?option value? ?option value? ...}
- tcltest::cleanupTests ?runningMultipleTests?
- tcltest::runAllTests
- tcltest::interpreter ?executableName?
- tcltest::singleProcess ?boolean?
- tcltest::debug ?level?
- 0
- 1
- 2
- 3
- tcltest::verbose ?levelList?
- body
- pass
- skip
- start
- error
- tcltest::preserveCore ?level?
- 0
- 1
- 2
- tcltest::testConstraint constraint ?value?
- tcltest::limitConstraints ?constraintList?
- tcltest::workingDirectory ?directoryName?
- tcltest::temporaryDirectory ?directoryName?
- tcltest::testsDirectory ?directoryName?
- tcltest::match ?globPatternList?
- tcltest::matchFiles ?globPatternList?
- tcltest::matchDirectories ?globPatternList?
- tcltest::skip ?globPatternList?
- tcltest::skipFiles ?globPatternList?
- tcltest::skipDirectories ?globPatternList?
- tcltest::loadTestedCommands
- tcltest::loadScript ?script?
- tcltest::loadFile ?filename?
- tcltest::outputChannel ?channelID?
- tcltest::outputFile ?filename?
- tcltest::errorChannel ?channelID?
- tcltest::errorFile ?filename?
- tcltest::makeFile contents name ?directory?
- tcltest::removeFile name ?directory?
- tcltest::makeDirectory name ?directory?
- tcltest::removeDirectory name ?directory?
- tcltest::viewFile file ?directory?
- tcltest::normalizeMsg msg
- tcltest::normalizePath pathVar
- tcltest::bytestring string
- tcltest::saveState
- tcltest::restoreState
- tcltest::threadReap
- tcltest::mainThread
- TESTS
- -constraints keywordList|expression
- -setup script
- -body script
- -cleanup script
- -match regexp|glob|exact
- -result expectedValue
- -output expectedValue
- -errorOutput expectedValue
- -returnCodes expectedCodeList
- TEST CONSTRAINTS
- singleTestInterp
- unix
- win
- nt
- 95
- 98
- mac
- unixOrWin
- macOrWin
- macOrUnix
- tempNotWin
- tempNotMac
- unixCrash
- winCrash
- macCrash
- emptyTest
- knownBug
- nonPortable
- userInteraction
- interactive
- nonBlockFiles
- asyncPipeClose
- unixExecs
- hasIsoLocale
- root
- notRoot
- eformat
- stdio
- RUNNING TEST FILES
- -help
- -singleproc <bool>
- -verbose <levelList>
- -match <matchList>
- -skip <skipList>
- -file <globPatternList>
- -notfile <globPatternList>
- -relateddir <globPatternList>
- -asidefromdir <globPatternList>
- -constraints <list>
- -limitconstraints <bool>
- -load <script>
- -loadfile <scriptfile>
- -tmpdir <directoryName>
- -testdir <directoryName>
- -preservecore <level>
- -debug <debugLevel>
- -outfile <filename>
- -errfile <filename>
- TEST OUTPUT
- CONTENTS OF A TEST FILE
- SELECTING TESTS FOR EXECUTION
- HOW TO CUSTOMIZE THE TEST HARNESS
- tcltest::PrintUsageInfoHook
- tcltest::processCmdLineArgsFlagHook
- tcltest::processCmdLineArgsHook flags
- tcltest::initConstraintsHook
- tcltest::cleanupTestsHook
- EXAMPLES
- KNOWN ISSUES
- KEYWORDS
tcltest - Test harness support code and utilities
package require tcltest ?2.0?
tcltest::test name desc ?option value? ?option value? ...
tcltest::test name desc {?option value? ?option value? ...}
tcltest::cleanupTests ?runningMultipleTests?
tcltest::runAllTests
tcltest::interpreter ?interp?
tcltest::singleProcess ?boolean?
tcltest::debug ?level?
tcltest::verbose ?levelList?
tcltest::preserveCore ?level?
tcltest::testConstraint constraint ?value?
tcltest::limitConstraints ?constraintList?
tcltest::workingDirectory ?dir?
tcltest::temporaryDirectory ?dir?
tcltest::testsDirectory ?dir?
tcltest::match ?patternList?
tcltest::matchFiles ?patternList?
tcltest::matchDirectories ?patternList?
tcltest::skip ?patternList?
tcltest::skipFiles ?patternList?
tcltest::skipDirectories ?patternList?
tcltest::loadTestedCommands
tcltest::loadScript ?script?
tcltest::loadFile ?filename?
tcltest::outputChannel ?channelID?
tcltest::outputFile ?filename?
tcltest::errorChannel ?channelID?
tcltest::errorFile ?filename?
tcltest::makeFile contents name ?directory?
tcltest::removeFile name ?directory?
tcltest::makeDirectory name ?directory?
tcltest::removeDirectory name ?directory?
tcltest::viewFile file ?directory?
tcltest::normalizeMsg msg
tcltest::normalizePath pathVar
tcltest::bytestring string
tcltest::saveState
tcltest::restoreState
tcltest::threadReap
The tcltest package provides the user with utility tools for
writing and running tests in the Tcl test suite. It can also be used
to create a customized test harness for an extension.
The Tcl test suite consists of multiple .test files, each of which
contains multiple test cases. Each test case consists of a call to
the test command, which specifies the name of the test, a short
description, any constraints that apply to the test case, the script
to be run, and expected results. See the "Tests" section for more
details.
It is also possible to add to this test harness to create your own
customized test harness implementation. For more details, see the
section "How to Customize the Test Harness".
- tcltest::test name desc ?option value? ?option value? ...
-
- tcltest::test name desc {?option value? ?option value? ...}
-
The tcltest::test command runs the script supplied with the script
attribute and compares its result to the expected results.
It prints an error message if actual results and expected results do
not match. The tcltest::test command returns 0 if it completes
successfully. Any other return value indicates that an error has
occurred in the tcltest package. See the "Tests" section for
more details on this command.
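For example, a minimal test case (the name and values here are illustrative) looks like this:

```tcl
# Run a script and compare its result against -result.
tcltest::test example-1.1 {expr adds two integers} -body {
    expr {1 + 2}
} -result {3}
```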
- tcltest::cleanupTests ?runningMultipleTests?
-
This command should be called at the end of a test file. It prints
statistics about the tests run and removes files that were created by
tcltest::makeDirectory and tcltest::makeFile. Names
of files and directories created outside of
tcltest::makeFile and tcltest::makeDirectory and
never deleted are printed to tcltest::outputChannel. This command
also restores the original shell environment, as described by the ::env
array. runningMultipleTests should be specified if
tcltest::cleanupTests is called explicitly from an "all.tcl"
file. Such files are generally used to run multiple tests. For
more details on how to run multiple tests, please see the section
"Running test files". This proc has no defined return value.
- tcltest::runAllTests
-
This command should be used in your 'all.tcl' file. It is used to
loop over test files and directories, determining which test files to
run and then running them. Note that this command calls
tcltest::cleanupTests; if using this proc in your 'all.tcl' file, you
should not call tcltest::cleanupTests explicitly in that file. See the
sample 'all.tcl' file in the "Examples" section.
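A minimal 'all.tcl' might be sketched along these lines (the directory handling is an assumption about your suite's layout):

```tcl
# Sketch of an "all.tcl" driver file.
package require tcltest 2.0

# Assume the test files live next to this script.
tcltest::testsDirectory [file dirname [info script]]

# runAllTests finds matching *.test files, runs them, and calls
# tcltest::cleanupTests itself, so no explicit cleanup call follows.
tcltest::runAllTests
```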
- tcltest::interpreter ?executableName?
-
Sets or returns the name of the executable used to invoke the test
suite. This is the interpreter used in runAllTests to run test files
if singleProcess is set to false. The default value for interpreter
is the name of the interpreter in which the tests were started.
- tcltest::singleProcess ?boolean?
-
Sets or returns a boolean indicating whether test files should be sourced
into the current interpreter by runAllTests or run in their own
processes. If boolean is true (1), tests are sourced into the
current interpreter. If boolean is false (0), tests are run in
the interpreter specified in tcltest::interpreter. The default value
for tcltest::singleProcess is false.
- tcltest::debug ?level?
-
Sets or returns the current debug level. The debug level determines
how much tcltest package debugging information is printed to stdout.
The default debug level is 0. Levels are defined as:
- 0
-
Do not display any debug information.
- 1
-
Display information regarding whether a test is skipped because it
doesn't match any of the tests that were specified using -match or
tcltest::match (userSpecifiedNonMatch) or matches any of the tests
specified by -skip or tcltest::skip (userSpecifiedSkip).
- 2
-
Display the flag array parsed by the command line processor, the
contents of the ::env array, and all user-defined variables that exist
in the current namespace as they are used.
- 3
-
Display information regarding what individual procs in the test
harness are doing.
- tcltest::verbose ?levelList?
-
Sets or returns the current verbosity level. The default verbosity
level is "body". See the "Test output" section for a more detailed
explanation of this option. Levels are defined as:
- body
-
Display the body of failed tests
- pass
-
Print output when a test passes
- skip
-
Print output when a test is skipped
- start
-
Print output whenever a test starts
- error
-
Print errorInfo and errorCode, if they exist, when a test return code
does not match its expected return code
- tcltest::preserveCore ?level?
-
Sets or returns the current core preservation level. This level
determines how stringent checks for core files are. The default core
preservation level is 0. Levels are defined as:
- 0
-
No checking - do not check for core files at the end of each test
command, but do check for them whenever tcltest::cleanupTests is
called from tcltest::runAllTests.
- 1
-
Check for core files at the end of each test command and whenever
tcltest::cleanupTests is called from tcltest::runAllTests.
- 2
-
Check for core files at the end of all test commands and whenever
tcltest::cleanupTests is called from all.tcl. Save any core files
produced in tcltest::temporaryDirectory.
- tcltest::testConstraint constraint ?value?
-
Sets or returns the value associated with the named constraint.
See the section "Test constraints" for more information.
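For example, a test file might define and query a custom constraint (the constraint name hasThreads is hypothetical):

```tcl
# Define a constraint whose value reflects whether this build of
# Tcl was compiled with thread support.
tcltest::testConstraint hasThreads \
        [info exists ::tcl_platform(threaded)]

# Query its current value (returns 0 or 1).
tcltest::testConstraint hasThreads

# Guard a test with it; the test is skipped when hasThreads is false.
tcltest::test thread-1.1 {needs a threaded build} -constraints \
        hasThreads -body {
    expr {1}
} -result {1}
```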
- tcltest::limitConstraints ?constraintList?
-
Sets or returns a boolean indicating whether testing is being limited
to constraints listed in constraintList.
If limitConstraints is not false, only those tests with constraints matching
values in constraintList will be run.
- tcltest::workingDirectory ?directoryName?
-
Sets or returns the directory in which the test suite is being run.
The default value for workingDirectory is the directory in which the
test suite was launched.
- tcltest::temporaryDirectory ?directoryName?
-
Sets or returns the output directory for temporary files created by
tcltest::makeFile and tcltest::makeDirectory. This defaults to the
directory returned by tcltest::workingDirectory.
- tcltest::testsDirectory ?directoryName?
-
Sets or returns the directory where the tests reside. This defaults
to the directory returned by tcltest::workingDirectory
if the script cannot determine where the tests directory is
located. This variable should be explicitly set if tests are being run
from an all.tcl file.
- tcltest::match ?globPatternList?
-
Sets or returns the glob pattern list that determines which tests
should be run. Only tests which match one of the glob patterns in
globPatternList are run by the test harness. The default value
for globPatternList is '*'.
- tcltest::matchFiles ?globPatternList?
-
Sets or returns the glob pattern list that determines which test files
should be run. Only test files which match one of the glob patterns in
globPatternList are run by the test harness. The default value
for globPatternList is '*.test'.
- tcltest::matchDirectories ?globPatternList?
-
Sets or returns the glob pattern list that determines which test
subdirectories of the current test directory should be run. Only test
subdirectories which match one of the glob patterns in
globPatternList are run by the test harness. The default value
for globPatternList is '*'.
- tcltest::skip ?globPatternList?
-
Sets or returns the glob pattern list that determines which tests (of
those matched by tcltest::match) should be skipped. The default value
for globPatternList is {}.
- tcltest::skipFiles ?globPatternList?
-
Sets or returns the glob pattern list that determines which test files
(of those matched by tcltest::matchFiles) should be skipped. The
default value for globPatternList is {}.
- tcltest::skipDirectories ?globPatternList?
-
Sets or returns the glob pattern list that determines which test
subdirectories (of those matched by tcltest::matchDirectories) should
be skipped. The default value for globPatternList is {}.
- tcltest::loadTestedCommands
-
This command uses the script specified via the -load or
-loadfile options or the tcltest::loadScript or
tcltest::loadFile procs to load the commands checked by the test suite.
It is allowed to be empty, as the tested commands could have been
compiled into the interpreter running the test suite.
- tcltest::loadScript ?script?
-
Sets or returns the script executed by loadTestedCommands.
- tcltest::loadFile ?filename?
-
Sets or returns the file name associated with the script executed
by loadTestedCommands. If setting filename, this proc will
open the file and call tcltest::loadScript with the content.
- tcltest::outputChannel ?channelID?
-
Sets or returns the output file ID. This defaults to stdout.
Any test that prints test related output should send
that output to tcltest::outputChannel rather than letting
that output default to stdout.
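For example, instead of printing directly to stdout, a test file can write:

```tcl
# Route diagnostics through the configured channel so that the
# -outfile option redirects them along with the rest of the output.
puts [tcltest::outputChannel] "note: running with extra diagnostics"
```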
- tcltest::outputFile ?filename?
-
Sets or returns the file name corresponding to the output file. This
defaults to stdout. This proc calls
outputChannel to set the output file channel.
Any test that prints test related output should send
that output to tcltest::outputChannel rather than letting
that output default to stdout.
- tcltest::errorChannel ?channelID?
-
Sets or returns the error file ID. This defaults to stderr.
Any test that prints error messages should send
that output to tcltest::errorChannel rather than printing
directly to stderr.
- tcltest::errorFile ?filename?
-
Sets or returns the file name corresponding to the error file. This
defaults to stderr. This proc calls
errorChannel to set the error file channel.
Any test that prints test related error output should send
that output to tcltest::errorChannel or
tcltest::outputChannel rather than letting
that output default to stderr.
- tcltest::makeFile contents name ?directory?
-
Create a file that will automatically be removed by
tcltest::cleanupTests at the end of a test file. This file is
created relative to directory. If left unspecified,
directory defaults to tcltest::temporaryDirectory.
Returns the full path of the file created.
- tcltest::removeFile name ?directory?
-
Force the file referenced by name to be removed. This file name
should be relative to directory. If left unspecified,
directory defaults to tcltest::temporaryDirectory. This proc
has no defined return values.
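A typical pattern pairs makeFile in a test's setup with removeFile in its cleanup; the test name, file name, and expected result here are illustrative:

```tcl
# Create a scratch file, read it back in the body, then remove it.
tcltest::test io-1.1 {viewFile returns what makeFile wrote} -setup {
    tcltest::makeFile {hello} testdata.txt
} -body {
    tcltest::viewFile testdata.txt
} -cleanup {
    tcltest::removeFile testdata.txt
} -result {hello}
```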
- tcltest::makeDirectory name ?directory?
-
Create a directory named name that will automatically be removed
by tcltest::cleanupTests at the end of a test file. This
directory is created relative to directory. If left unspecified,
directory defaults to tcltest::temporaryDirectory.
Returns the full path of the directory created.
- tcltest::removeDirectory name ?directory?
-
Force the directory referenced by name to be removed. This
directory should be relative to directory. If left unspecified,
directory defaults to tcltest::temporaryDirectory. This proc
has no defined return value.
- tcltest::viewFile file ?directory?
-
Returns the contents of file. This file name
should be relative to directory. If left unspecified,
directory defaults to tcltest::temporaryDirectory.
- tcltest::normalizeMsg msg
-
Remove extra newlines from msg.
- tcltest::normalizePath pathVar
-
Resolves symlinks in a path, thus creating a path without internal
redirection. It is assumed that pathVar is absolute.
pathVar is modified in place.
- tcltest::bytestring string
-
Construct a string that consists of the requested sequence of bytes,
as opposed to a string of properly formed UTF-8 characters using the
value supplied in string. This allows the tester to create
denormalized or improperly formed strings to pass to C procedures that
are supposed to accept strings with embedded NULL bytes and confirm
that a string result has a certain pattern of bytes.
- tcltest::saveState
-
Save procedure and global variable names.
A test file might contain calls to tcltest::saveState and
::tcltest::restoreState if it creates or deletes global variables
or procs.
- tcltest::restoreState
-
Restore procedure and global variable names.
A test file might contain calls to tcltest::saveState and
::tcltest::restoreState if it creates or deletes global variables
or procs.
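For example (the helper proc and variable names are hypothetical):

```tcl
# Snapshot the current procs and global variables.
tcltest::saveState

# Work that pollutes the global namespace.
proc ::scratchHelper {} { return 42 }
set ::scratchVar 1

# Remove procs and globals created since the snapshot.
tcltest::restoreState
```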
- tcltest::threadReap
-
tcltest::threadReap only works if testthread is
defined, generally by compiling tcltest. If testthread is
defined, tcltest::threadReap kills all threads except for the
main thread. It gets the ID of the main thread by calling
testthread names during initialization. This value is stored in
tcltest::mainThread. tcltest::threadReap returns the
number of existing threads at completion.
- tcltest::mainThread
-
Sets or returns the main thread ID. This defaults to 1. This is the
only thread that is not killed by tcltest::threadReap and is set
according to the return value of testthread names at
initialization.
The test procedure runs a test script and prints an error
message if the script's result does not match the expected result.
Two syntaxes are provided for specifying the attributes of the tests.
The first uses a separate argument for each of the attributes and
values. The second form places all of the attributes and values
together into a single argument; the argument must have proper list
structure, with the elements of the list being the attributes and
values. The second form makes it easy to construct multi-line
scripts, since the braces around the whole list make it unnecessary to
include a backslash at the end of each line. In the second form, no
command or variable substitutions are performed on the attribute
names. This makes the behavior of the second form different from the
first form in some cases.
The first form for the test command:
test name description
?-constraints keywordList|expression
?-setup setupScript?
?-body testScript?
?-cleanup cleanupScript?
?-result expectedAnswer?
?-output expectedOutput?
?-errorOutput expectedError?
?-returnCodes codeList?
?-match exact|glob|regexp?
The second form for the test command (adds brace grouping):
test name description {
?-constraints keywordList|expression
?-setup setupScript?
?-body testScript?
?-cleanup cleanupScript?
?-result expectedAnswer?
?-output expectedOutput?
?-errorOutput expectedError?
?-returnCodes codeList?
?-match exact|glob|regexp?
}
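For example, the same test written in both forms (names and values are illustrative):

```tcl
# First form: each attribute and its value is a separate argument.
tcltest::test expr-2.1 {multiply two integers} -body {
    expr {6 * 7}
} -result {42}

# Second form: one braced argument holds the whole attribute list,
# so no backslashes are needed for line continuation and no
# substitutions are performed on the attribute names.
tcltest::test expr-2.2 {multiply two integers} {
    -body {expr {6 * 7}}
    -result {42}
}
```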
The name argument should follow the pattern:
<target>-<majorNum>.<minorNum>
For white-box (regression) tests, the target should be the name of the
C function or Tcl procedure being tested. For black-box tests, the
target should be the name of the feature being tested. Related tests
should share a major number.
The description should be a short textual description of the
test. It is generally used to help humans
understand the purpose of the test. The name of a Tcl or C function
being tested should be included in the description for regression
tests. If the test case exists to reproduce a bug, include the bug ID
in the description.
Valid attributes and associated values are:
- -constraints keywordList|expression
-
The optional constraints attribute can be a list of one or more
keywords or an expression. If the constraints value consists of
keywords, each keyword must be the name of a constraint
defined by a call to tcltest::testConstraint. If any of these
constraints is false or does
not exist, the test is skipped. If the constraints argument
consists of an expression, that expression is evaluated. If the
expression evaluates to true, then the test is run. Appropriate
constraints should be added to any tests that should
not always be run. See the "Test Constraints" section for a list of built-in
constraints and information on how to add your own constraints.
- -setup script
-
The optional setup attribute indicates a script that will be run
before the script indicated by the script attribute. If setup
fails, the test will fail.
- -body script
-
The body attribute indicates the script to run to carry out the
test. It must return a result that can be checked for correctness.
If left unspecified, the script value will be {}.
- -cleanup script
-
The optional cleanup attribute indicates a script that will be
run after the script indicated by the script attribute. If
cleanup fails, the test will fail.
- -match regexp|glob|exact
-
The match attribute determines how expected answers supplied in
result, output, and errorOutput are compared. Valid
options for the value supplied are ``regexp'', ``glob'', and
``exact''. If match is not specified, the comparisons will be
done in ``exact'' mode by default.
- -result expectedValue
-
The result attribute supplies the comparison value with which
the return value from script will be compared.
If left unspecified, the default
expectedValue will be the empty list.
- -output expectedValue
-
The output attribute supplies the comparison value with which
any output sent to stdout or tcltest::outputChannel during the script
run will be compared. Note that only output printed using
puts is used for comparison. If output is not specified, output
sent to stdout and tcltest::outputChannel is not processed for comparison.
- -errorOutput expectedValue
-
The errorOutput attribute supplies the comparison value with which
any output sent to stderr or tcltest::errorChannel during the script
run will be compared. Note that only output printed using
puts is used for comparison. If errorOutput is not specified, output
sent to stderr and tcltest::errorChannel is not processed for comparison.
- -returnCodes expectedCodeList
-
The optional returnCodes attribute indicates which return codes
from the script supplied with the script attribute are correct.
Default values for expectedCodeList are 0 (normal return) and 2
(return exception). Symbolic values normal (0), error
(1), return (2), break (3), and continue (4) can be
used in the expectedCodeList list.
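For example, a test that expects its body to raise an error can specify -returnCodes and compare the error message via -result:

```tcl
# The body raises an error; -returnCodes error (code 1) accepts it,
# and -result is compared against the error message.
tcltest::test err-1.1 {integer division by zero raises an error} -body {
    expr {1 / 0}
} -returnCodes error -result {divide by zero}
```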
To pass, a test must successfully execute its setup, script, and
cleanup code. The return code of the test and its return values must
match expected values, and if specified, output and error data from
the test must match expected output and error values. If all of these
conditions are not met, then the test fails.
Constraints are used to determine whether or not a test should be skipped.
If a test is constrained by ``unix'', then it will only be run if
the value of the constraint is true. Several
constraints are defined in the tcltest package. To add
constraints, you can call tcltest::testConstraint
with the appropriate arguments in your own test file.
The following is a list of constraints defined in the tcltest package:
- singleTestInterp
-
test can only be run if all test files are sourced into a single interpreter
- unix
-
test can only be run on any UNIX platform
- win
-
test can only be run on any Windows platform
- nt
-
test can only be run on any Windows NT platform
- 95
-
test can only be run on any Windows 95 platform
- 98
-
test can only be run on any Windows 98 platform
- mac
-
test can only be run on any Mac platform
- unixOrWin
-
test can only be run on a UNIX or Windows platform
- macOrWin
-
test can only be run on a Mac or Windows platform
- macOrUnix
-
test can only be run on a Mac or UNIX platform
- tempNotWin
-
test can not be run on Windows. This flag is used to temporarily
disable a test.
- tempNotMac
-
test can not be run on a Mac. This flag is used
to temporarily disable a test.
- unixCrash
-
test crashes if it's run on UNIX. This flag is used to temporarily
disable a test.
- winCrash
-
test crashes if it's run on Windows. This flag is used to temporarily
disable a test.
- macCrash
-
test crashes if it's run on a Mac. This flag is used to temporarily
disable a test.
- emptyTest
-
test is empty, and so not worth running, but it remains as a
place-holder for a test to be written in the future. This constraint
always causes tests to be skipped.
- knownBug
-
test is known to fail and the bug is not yet fixed. This constraint
always causes tests to be skipped unless the user specifies otherwise.
See the "Introduction" section for more details.
- nonPortable
-
test can only be run in the master Tcl/Tk development environment.
Some tests are inherently non-portable because they depend on things
like word length, file system configuration, window manager, etc.
These tests are only run in the main Tcl development directory where
the configuration is well known. This constraint always causes tests
to be skipped unless the user specifies otherwise.
- userInteraction
-
test requires interaction from the user. This constraint always
causes tests to be skipped unless the user specifies otherwise.
- interactive
-
test can only be run if the interpreter is in interactive mode
(when the global tcl_interactive variable is set to 1).
- nonBlockFiles
-
test can only be run if platform supports setting files into
nonblocking mode
- asyncPipeClose
-
test can only be run if platform supports async flush and async close
on a pipe
- unixExecs
-
test can only be run if this machine has Unix-style commands
cat, echo, sh, wc, rm, sleep,
fgrep, ps, chmod, and mkdir available
- hasIsoLocale
-
test can only be run if the harness can switch to an ISO locale
- root
-
test can only run if Unix user is root
- notRoot
-
test can only run if Unix user is not root
- eformat
-
test can only run if app has a working version of sprintf with respect
to the "e" format of floating-point numbers.
- stdio
-
test can only be run if the current app can be spawned via a pipe
Use the following command to run a test file that uses package
tcltest:
<shell> <testFile> ?<option> ?<value>?? ...
Command line options include (tcltest accessor procs that
correspond to each flag are listed at the end of each flag description
in parenthesis):
- -help
-
display usage information.
- -singleproc <bool>
-
if <bool> is 0, run test files in separate interpreters. if 1, source test
files into the current interpreter. (tcltest::singleProcess)
- -verbose <levelList>
-
set the level of verbosity to a list containing 0 or more of "body",
"pass", "skip", "start", and "error". See the "Test output" section
for an explanation of this option. (tcltest::verbose)
- -match <matchList>
-
only run tests that match one or more of the glob patterns in
<matchList>. (tcltest::match)
- -skip <skipList>
-
do not run tests that match one or more of the glob patterns in
<skipList>. (tcltest::skip)
- -file <globPatternList>
-
only source test files that match any of the items in
<globPatternList> relative to tcltest::testsDirectory.
This option
only makes sense if you are running tests using "all.tcl" as the
<testFile> instead of running single test files directly.
(tcltest::matchFiles)
- -notfile <globPatternList>
-
source files except for those that match any of the items in
<globPatternList> relative to tcltest::testsDirectory.
This option
only makes sense if you are running tests using "all.tcl" as the
<testFile> instead of running single test files directly.
(tcltest::skipFiles)
- -relateddir <globPatternList>
-
only run tests in directories that match any of the items in
<globPatternList> relative to tcltest::testsDirectory.
This option
only makes sense if you are running tests using "all.tcl" as the
<testFile> instead of running single test files directly.
(tcltest::matchDirectories)
- -asidefromdir <globPatternList>
-
run tests in directories except for those that match any of the items in
<globPatternList> relative to tcltest::testsDirectory.
This option
only makes sense if you are running tests using "all.tcl" as the
<testFile> instead of running single test files directly.
(tcltest::skipDirectories)
- -constraints <list>
-
tests with any constraints in <list> will not be skipped. Note that
elements of <list> must exactly match the existing constraints. This
is useful if you want to make sure that tests with a particular
constraint are run (for example, if the tester wants to run all tests
with the knownBug constraint).
(tcltest::testConstraint)
- -limitconstraints <bool>
-
If the argument to this flag is 1, the test harness limits test runs
to those tests that match the constraints listed by the -constraints
flag. Use of this flag requires use of the -constraints flag. The
default value for this flag is 0 (false). This is useful if you want
to run only those tests that match the constraints listed using
the -constraints option; for example, a tester might want to run
only those tests that are constrained to be unix and no other
tests.
(tcltest::limitConstraints)
- -load <script>
-
will use the specified script to load the commands under test
(tcltest::loadTestedCommands). The default is the empty
script. See -loadfile below too. (tcltest::loadScript)
- -loadfile <scriptfile>
-
will use the contents of the named file to load the commands under
test (tcltest::loadTestedCommands). See -load above too. The default
is the empty script. (tcltest::loadFile)
- -tmpdir <directoryName>
-
put any temporary files (created with tcltest::makeFile and
tcltest::makeDirectory) into the named directory. The default
location is tcltest::workingDirectory. (tcltest::temporaryDirectory)
- -testdir <directoryName>
-
search the test suite to execute in the named directory. The default
location is tcltest::workingDirectory. (tcltest::testsDirectory)
- -preservecore <level>
-
check for core files. This flag is used to determine how much
checking should be done for core files. (tcltest::preserveCore)
- -debug <debugLevel>
-
print debug information to stdout. This is used to debug code in the
tcltest package. (tcltest::debug)
- -outfile <filename>
-
print output generated by the tcltest package to the named file. This
defaults to stdout. Note that debug output always goes to stdout,
regardless of this flag's setting. (tcltest::outputFile)
- -errfile <filename>
-
print errors generated by the tcltest package to the named file. This
defaults to stderr. (tcltest::errorFile)
You can specify any of the above options on the command line or by
defining an environment variable named TCLTEST_OPTIONS containing a
list of options (e.g. "-debug 3 -verbose 'pass skip'"). This
environment variable is evaluated before the command line arguments.
Options specified on the command line override those specified in
TCLTEST_OPTIONS.
A second way to run tests is to start up a shell, load the
tcltest package, and then source an appropriate test file or use
the test command. To use the options in interactive mode, set
their corresponding tcltest namespace variables after loading the
package.
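An interactive session might therefore look like this (the test file name follows the example later in this document):

```tcl
# Load the package, configure options through the accessor procs,
# then source a test file by hand.
package require tcltest 2.0
tcltest::verbose {body pass}
tcltest::debug 1
source socket.test
```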
See "Test Constraints" for a list of all built-in constraint names.
A final way to run tests would be to specify which test files to run
within an all.tcl (or otherwise named) file. This is the
approach used by the Tcl test suite. This file loads the tcltest
package, sets the location of
the test directory (tcltest::testsDirectory), and then calls the
tcltest::runAllTests proc, which determines which test
files to run, how to run them, and calls tcltest::cleanupTests to
determine the summary status of the test suite.
A more elaborate all.tcl file might do some pre- and
post-processing before sourcing
each .test file, use separate interpreters for each file, or handle
complex directory structures.
For an example of an all.tcl file,
please see the "Examples" section of this document.
After all specified test files are run, the number of tests
passed, skipped, and failed is printed to
tcltest::outputChannel. Aside from this
statistical information, output can be controlled on a per-test basis
by the tcltest::verbose variable.
tcltest::verbose can be set to any combination of "body",
"skip", "pass", "start", or "error". The default value of
tcltest::verbose is "body". If "body" is present, then the
entire body of the test is printed for each failed test, otherwise
only the test's name, desired output, and
actual output, are printed for each failed test. If "pass" is present,
then a line is printed for each passed test, otherwise no line is
printed for passed tests. If "skip" is present, then a line (containing
the constraints that cause the test to be skipped) is printed for each
skipped test, otherwise no line is printed for skipped tests. If "start"
is present, then a line is printed each time a new test starts.
If "error" is present, then the content of errorInfo and errorCode (if
they are defined) is printed for each test whose return code doesn't
match its expected return code.
You can set tcltest::verbose either interactively (after the
tcltest package has been loaded) or by using the command line
argument -verbose, for example:
tclsh socket.test -verbose 'body pass skip'
Test files should begin by loading the tcltest package:
package require tcltest
namespace import -force tcltest::*
Test files should end by cleaning up after themselves and calling
tcltest::cleanupTests. The tcltest::cleanupTests
procedure prints statistics about the number of tests that passed,
skipped, and failed, and removes all files that were created using the
tcltest::makeFile and tcltest::makeDirectory procedures.
# Remove files created by these tests
# Change to original working directory
# Unset global arrays
tcltest::cleanupTests
return
When naming test files, file names should end with a .test extension.
The names of test files that contain regression (or glass-box) tests
should correspond to the Tcl or C code file that they are testing.
For example, the test file for the C file "tclCmdAH.c" is "cmdAH.test".
Normally, all the tests in a file are run whenever the file is
sourced. An individual test will be skipped if one of the following
conditions is met:
- [1]
-
the name of the test does not match (using glob style matching)
any element of the tcltest::match variable
- [2]
-
the name of the test matches (using glob style matching) one or
more elements in the tcltest::skip variable
- [3]
-
the constraints argument to the tcltest::test call, if
given, contains one or more false elements.
You can set tcltest::match and/or tcltest::skip
either interactively (after the tcltest package has been
sourced), or by using the command line arguments -match and
-skip, for example:
tclsh info.test -match '*-5.* *-7.*' -skip '*-7.1*'
Be sure to use the proper quoting convention so that your shell does
not perform the glob substitution on the match or skip patterns you
specify.
Predefined constraints (e.g. knownBug and nonPortable) can be
overridden either interactively (after the tcltest package has been
sourced) by setting the proper constraint,
or by using the -constraints command line option with the names of the
constraints as its argument. The following example shows how to run
tests that are constrained by the knownBug and nonPortable
restrictions:
tclsh all.tcl -constraints "knownBug nonPortable"
See the "Constraints" section for information about using
built-in constraints and adding new ones.
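Interactively, a sketch of the equivalent of the command line above
(assuming a hypothetical foo.test file) would be:

```tcl
package require tcltest 2.0

# Turn the constraints on so that tests marked knownBug or
# nonPortable are no longer skipped
tcltest::testConstraint knownBug 1
tcltest::testConstraint nonPortable 1

source foo.test
```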
When tests are run from within an all.tcl file, all files with a
``.test'' extension are normally run. An individual test file will
be skipped if one of the following conditions is met:
- [1]
-
the name of the test file does not match (using glob style matching)
any element of the tcltest::matchFiles variable
- [2]
-
the name of the test file matches (using glob style matching) one or
more elements in the tcltest::skipFiles variable
You can set tcltest::matchFiles and/or tcltest::skipFiles
either interactively (after the tcltest package has been
sourced), or by using the command line arguments -file and
-notfile, for example:
tclsh info.test -file 'unix*.test' -notfile 'unixNotfy.test'
Additionally, if tests are run from within an 'all.tcl' containing a
call to tcltest::runAllTests, any subdirectory of
tcltest::testsDirectory containing an 'all.tcl' file will also
be run. Individual test subdirectories will be skipped if one of the
following conditions is met:
- [1]
-
the name of the directory does not match (using glob style matching)
any element of the tcltest::matchDirectories variable
- [2]
-
the name of the directory matches (using glob style matching) one or
more elements in the tcltest::skipDirectories variable
You can set tcltest::matchDirectories and/or tcltest::skipDirectories
either interactively (after the tcltest package has been
sourced), or by using the command line arguments -relateddir and
-asidefromdir, for example:
tclsh info.test -relateddir 'subdir*' -asidefromdir 'subdir2'
To create your own custom test harness, create a .tcl file that contains your
own namespace. Within this file, require the tcltest package. Commands
that can be redefined to customize the test harness include:
- tcltest::PrintUsageInfoHook
-
print additional usage information specific to your situation.
- tcltest::processCmdLineArgsAddFlagHook
-
tell the test harness about additional flags that you want it to understand.
- tcltest::processCmdLineArgsHook flags
-
process the additional flags that you told the harness about in
tcltest::processCmdLineArgsAddFlagHook.
- tcltest::initConstraintsHook
-
used to add additional built-in constraints to those already defined
by tcltest.
- tcltest::cleanupTestsHook
-
do additional cleanup
To add new flags to your customized test harness, redefine
tcltest::processCmdLineArgsAddFlagHook to define additional flags to be
parsed and tcltest::processCmdLineArgsHook to actually process them.
For example:
proc tcltest::processCmdLineArgsAddFlagHook {} {
    return [list -flag1 -flag2]
}
proc tcltest::processCmdLineArgsHook {flagArray} {
    array set flag $flagArray
    if {[info exists flag(-flag1)]} {
        # Handle flag1
    }
    if {[info exists flag(-flag2)]} {
        # Handle flag2
    }
    return
}
You may also want to add usage information for these flags. This
information would be displayed whenever the user specifies -help. To
define additional usage information, define your own
tcltest::PrintUsageInfoHook proc. Within this proc, you should
print out additional usage information for any flags that you've
implemented.
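For example, a usage hook for the hypothetical -flag1 and -flag2 flags
might be sketched as:

```tcl
proc tcltest::PrintUsageInfoHook {} {
    puts "   -flag1      description of what -flag1 does"
    puts "   -flag2      description of what -flag2 does"
}
```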
To add new built-in
constraints to the test harness, define your own version of
tcltest::initConstraintsHook.
Within your proc, you can add to the tcltest::testConstraints array.
For example:
proc tcltest::initConstraintsHook {} {
    set tcltest::testConstraints(win95Or98) \
            [expr {$tcltest::testConstraints(95) || \
            $tcltest::testConstraints(98)}]
}
Finally, if you want to add additional cleanup code to your harness
you can define your own tcltest::cleanupTestsHook. For example:
proc tcltest::cleanupTestsHook {} {
    # Add your cleanup code here
}
- [1]
-
A simple test file (foo.test)
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} -body {set foo 1} -result 1
tcltest::cleanupTests
return
- [2]
-
A simple all.tcl
package require tcltest
namespace import -force ::tcltest::*
tcltest::testsDirectory [file dir [info script]]
tcltest::runAllTests
return
- [3]
-
Running a single test
tclsh foo.test
- [4]
-
Running multiple tests
tclsh all.tcl -file 'foo*.test' -notfile 'foo2.test'
- [5]
-
A test that uses the unixOnly constraint and should only be
run on Unix
test getAttribute-1.1 {testing file permissions} {
    -constraints {unixOnly}
    -body {
        lindex [file attributes foo.tcl] 5
    }
    -result {00644}
}
- [6]
-
A test containing a constraint expression that evaluates to true (so the test is run) only when running on unix and not testing threads
test testOnUnixWithoutThreads-1.1 {
    this test runs only on unix and only if we're not testing
    threads
} {
    -constraints {unixOnly && !testthread}
    -body {
        # some script goes here
    }
}
There are two known issues related to nested test commands.
The first issue relates to the stack level in which test scripts are
executed. Tests nested within other tests may be executed at the same
stack level as the outermost test. For example, in the following test
code:
test level-1.1 {level 1} {
    -body {
        test level-2.1 {level 2} {
        }
    }
}
any script executed in level-2.1 may be executed at the same stack
level as the script defined for level-1.1.
In addition, although two
test commands have been run, results will only be reported for tests
at the same level as test level-1.1. However, test results for all
tests run prior to level-1.1 will be available when test level-2.1
runs. This means that if you query the test results
for test level-2.1, it may say that 'm' tests have run, 'n' tests
have been skipped, 'o' tests have passed and 'p' tests have failed,
where 'm', 'n', 'o', and 'p' refer to tests that were run at the same
test level as test level-1.1.
Implementation of output and error comparison in the test command
depends on usage of puts in your application code. Output is
intercepted by redefining the puts command while the defined test
script is being run. Errors thrown by C procedures or printed
directly from C applications will not be caught by the test command.
Therefore, output and error comparison in the test command (via its
-output and -errorOutput options) is useful only for pure Tcl
applications that use the puts command for output.
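As a sketch, a pure-Tcl test that exercises both comparisons through
the test command's -output and -errorOutput options might look like:

```tcl
package require tcltest 2.0

# These comparisons work only because puts is intercepted while the
# -body script runs; output written directly from C is not captured.
tcltest::test output-1.1 {compare stdout and stderr of a Tcl body} {
    -body {
        puts "to stdout"
        puts stderr "to stderr"
    }
    -output "to stdout\n"
    -errorOutput "to stderr\n"
}
```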
test, test harness, test suite
Copyright © 1990-1994 The Regents of the University of California
Copyright © 1994-1997 Sun Microsystems, Inc.
Copyright © 1998-1999 Scriptics Corporation
Copyright © 2000 Ajuba Solutions
Copyright © 1995-1997 Roger E. Critchlow Jr.