
Commit c8bd00a

FAB-3583 systemtest placeholders, readme
Includes:
+ updated daily README file in this patch request
+ a simple sample test script driver/wrapper, to show syntax to enable others to easily include tests in the daily and weekly CI test suites (these are in the daily and weekly folders, to be kicked off by jenkins in separate jobs - not to be confused with the many other unit tests and behave tests that are run with every checkin)
+ in addition to the sample, this also includes a python driver for system tests written with PTE

Any new tests can be written in any language and simply incorporated into a python wrapper like this. Note this driver will produce xml output, which Jenkins will use as input to a display board. Jenkins jobs will be created to execute some of these tests as part of a daily test suite, as well as a longer, weekly test suite.

In the test_pte.py driver, some planned PTE system test names and comments are added, but they are currently stubbed out with a TestPlaceholder, so those tests will fail (as planned, until the actual tests are inserted as they become available). For instance, executing the weekly test suite (only 2 tests) currently would produce the following:

```
$ cd .../fabric/test/regression/weekly
$ py.test -v --junitxml results.xml ./test_pte.py
============= test session starts =============
platform linux2 -- Python 2.7.6 -- pytest-2.5.1 -- /usr/bin/python
collected 2 items

test_pte.py <- LevelDB_Perf_Stress.test_FAB3601_Standard_72Hr SKIPPED
test_pte.py <- CouchDB_Perf_Stress.test_FAB3602_Standard_72Hr SKIPPED

----- generated xml file: /home/scottz/work/src/github.com/hyperledger/fabric/test/regression/weekly/results.xml -----
===== 2 skipped in 0.04 seconds =====
```

Change-Id: I1f259f303d41a73456fa1f4132dcff7206af98fd
Signed-off-by: Scott Zwierzynski <[email protected]>
1 parent 8f4b6a9 commit c8bd00a

14 files changed: +623 -6 lines changed

test/regression/daily/README.md

+121
@@ -0,0 +1,121 @@
# Daily Test Suite

This readme explains everything there is to know about our daily regression test suite. *Note 1*: This applies similarly to both the **test/regression/daily/** and **test/regression/weekly/** test suites. *Note 2*: The Release Criteria (**test/regression/release/**) test suite is a subset of all the Daily and Weekly tests.

- How to Run the Tests
- Where to View the Results produced by the daily automation tests
- Where to Find Existing Tests
- How to Add New Tests to the Automated Test Suite
  * Why Test Output Format Must Be *xml* and How to Make It So
  * Alternative 1: Add a test using an existing tool and test driver script
  * Alternative 2: Add a new test with a new tool and new test driver script
  * How to Add a New Chaincode Test

## How to Run the Tests, and Where to View the Results

Everything starts with [runDailyTestSuite.sh](./runDailyTestSuite.sh), which invokes all the test driver scripts, such as **test_pte.py** and **test_chaincodes.py**. Together, these driver scripts initiate all tests in the daily test suite. You can manually execute **runDailyTestSuite.sh** in its entirety, or run any one of the test driver scripts on the command line. Alternatively, you may simply view the results generated daily by an automated Continuous Integration (CI) tool which executes **runDailyTestSuite.sh**. Reports are displayed on the [Daily Test Suite Results Page](https://jenkins.hyperledger.org/view/Daily/job/fabric-daily-chaincode-tests-x86_64/test_results_analyzer). When you look at the reports, click the buttons in the **'See children'** column to see the results breakdown by component and by individual test.
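
For example, to run the whole daily suite by hand, or just one driver script (this assumes python and py.test are installed, as noted in the header comments of **test_example.py**):

```
cd /path/to/fabric/test/regression/daily
./runDailyTestSuite.sh                            # runs every test driver script
py.test -v --junitxml results.xml ./test_pte.py   # or run a single driver script
```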

#### Where to Find Existing Tests

Examine the driver scripts to find the individual tests, which are actually stored in several locations under **/path/to/fabric/test/**. Some tests are located in test suite subdirectories such as

- **test/regression/daily/chaincodeTests/**

whereas other tests are located in the tool directories themselves, such as

- **test/feature/ft/** - User-friendly *Behave* functional test feature files
- **test/tools/PTE/** - Performance Traffic Engine *(PTE)* tool and tests
- **test/tools/OTE/** - Orderer Traffic Engine *(OTE)* tool and tests

Each testcase title should provide the test objective and a Jira FAB issue which can be referenced for more information. Test steps and specific details can be found in the summary comments of the test scripts themselves. Additional information can be found in the README files associated with the various test directories.

## How to Add New Tests to the Automated Test Suite

We love contributors! Anyone may add a new test to an existing test driver script, or even create a new tool and new test driver script. The steps for both scenarios are provided further below as *Alternative 1* and *Alternative 2*. First, a few things to note:

- Before linking a test case into the CI automation tests, please merge your (tool and) testcase into gerrit, and create a Jira task, as follows:

  1. First merge your tool and tests to gerrit in the appropriate folders under **/path/to/fabric/test/**.
  1. Of course, all tests must pass before being submitted. We do not want to see any false positives for test case failures.
  1. To integrate your new tests into the CI automation test suite, create a new Jira task FAB-nnnn for each testcase, and use 'relates-to' to link it to epic FAB-3770.
  1. You will use this new Jira task to submit a changeset to gerrit, to invoke your testcase from a driver script similar to **/path/to/fabric/test/regression/daily/test_example.py**. In the comments of the gerrit merge request submission, include
     - the Jira task FAB-nnnn
     - the testcase title and objective
     - a copy of the filled-in template from Jira epic FAB-3770
  1. Follow all the steps below in either *Alternative*, and then the test will be executed automatically as part of the next run of the CI daily test suite. The results will show up on the daily test suite display board, which can be viewed by following the link at the top of this page.

#### Why Test Output Format Must Be *xml* and How to Make It So

The Continuous Integration (CI) team utilizes a Jenkins job to execute the full test suite, **runDailyTestSuite.sh**. The CI job consumes xml output files, creates reports, and displays them. *Note: When adding new scripts that generate new xml files, if you do not see the results displayed correctly, please contact us on [Rocket.Chat channel #fabric-ci](https://chat.hyperledger.org).* For this reason, we execute tests in one of the following ways:

1. Invoke the individual testcase from within a test driver script in **regression/daily/**. There are many examples here, such as **test_example.py** and **test_pte.py**. These test driver scripts are basically wrappers written in python, which makes it easy to produce the desired junitxml output format required for displaying reports. This method is useful for almost any test language, including bash, tool binaries, and more. More details are provided below explaining how to call testcases from within a test driver script. Here we show how simple it is to execute the test driver and all the testcases within it. *Note: File 'example_results.xml' will be created, containing the test output.*

```
cd /path/to/fabric/test/regression/daily
py.test -v --junitxml example_results.xml ./test_example.py
```

1. Execute 'go test', and pipe the output through the tool github.com/jstemmer/go-junit-report to convert it to xml. *Note: In the example shown, file 'results.xml' will be created with the test output.*

```
cd /path/to/fabric/test/tools/OTE
go get github.com/jstemmer/go-junit-report
go test -run ORD77 -v | go-junit-report >> results.xml
```

1. *If you know another method that produces xml files that can be displayed correctly, please share it here!*
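
One method that is already wired into the sample driver is python's *xmlrunner* package: **test_example.py** ends with a `__main__` block, so running the script directly (rather than through py.test) also produces xml reports, written under **runner-results/**:

```
# from the bottom of test_example.py; requires 'sudo pip install xmlrunner'
if __name__ == '__main__':
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='runner-results'))
```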

### Alternative 1: Add a test using an existing tool and test driver script

To add another test using an existing tool (such as **PTE**), simply add a test inside the existing test driver (such as **test_pte.py**). It is as simple as copying a block of about ten lines and modifying these things:

1. Insert the testcase in the correct test component class and edit the test name
1. Edit the testcase description
1. Edit the specified command and arguments to be executed
1. Edit the asserted test result to be matched

Refer to **test_example.py** for a model to clone and get started quickly. The testcases should use the format shown in this example:

```
def test_FAB9876_1K_Payload(self):
    '''
    Launch standard network.
    Use PTE stress mode to send 100 invoke transactions
    concurrently to all peers on all channels on all
    chaincodes, with 1K payloads. Query the ledger for
    each to ensure the last transaction was written,
    calculate tps, and remove network and cleanup.
    '''
    result = subprocess.check_output("../../tools/PTE/tests/run1KPayloadTest.sh", shell=True)
    self.assertIn(TEST_PASS_STRING, result)
```

### Alternative 2: Add a new test with a new tool and new test driver script

Adding a new test with a new tool involves a few more steps.

1. Create and merge a new tool, for example, **/path/to/fabric/test/tools/NewTool/newTool.sh**
1. Create a new test driver script such as **/path/to/fabric/test/regression/daily/test_newTool.py**. Model it after the other driver scripts, like **test_example.py**, found under **/path/to/fabric/test/regression/daily/** and **test/regression/weekly/**. Note: the filename must start with 'test_'.
1. Add your new testcases to **test_newTool.py**. The testcases should use the following format (a filled-in example appears after these steps). Refer also to the steps described in Alternative 1, above.

```
class <component_feature>(unittest.TestCase):
    def test_<FAB9999>_<title>(self):
        '''
        <Network Configuration>
        <Test Description and Test Objective>
        '''
        result = subprocess.check_output("<command to invoke newTool.sh arg1 arg2>", shell=True)
        self.assertIn("<string from stdout of newTool that indicates PASS>", result)
```

1. Edit **/path/to/fabric/test/regression/daily/runDailyTestSuite.sh** to run the new testcases. Add a new line, or append your new test driver script name **test_newTool.py** to an existing line:

```
py.test -v --junitxml results.xml test_example.py test_newTool.py
```
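
For illustration, here is a filled-in testcase for a hypothetical **NewTool** (adapted from an earlier revision of this README); the network configuration, command arguments, and PASS string are placeholders to replace with your own:

```
class NewTool(unittest.TestCase):
    def test_FAB2032_TLS(self):
        '''
        Network: 2 Ord, 5 KB, 3 ZK, 2 Org, 4 Peers, 10 Chan, 10 CC
        Launch network, use NewTool to wreak havoc on the network by
        doing something crazy, and ensure the network handles it gracefully.
        Then remove network and cleanup.
        '''
        result = subprocess.check_output("../../tools/NewTool/newTool.sh arg1 arg2", shell=True)
        self.assertIn("A STRING from stdout of NewTool that indicates PASS", result)
```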

### How to Add a New Chaincode Test

To leverage our CI mechanism to automatically test your own chaincode daily, refer to the [regression/daily/chaincodeTests/README](./chaincodeTests/README.rst) for instructions.

test/regression/daily/README.rst

-2
This file was deleted.

test/regression/daily/README.rst.orig

+80
@@ -0,0 +1,80 @@
# Daily Test Suite
Click here for the latest daily status report (WIP: link TBD)

## Running tests
The entire suite of tests may be executed from script [runDailyTestSuite.sh](runDailyTestSuite.sh).
Refer to that script for more details about how to invoke each test.

## Adding new tests
Contributors may add a new test to an existing related test tool group, or create a new one.
Some examples:

1. To add another test to an existing test suite subgroup, such as
the Performance Traffic Engine (PTE) tool,
add a test inside the existing python wrapper test_pte.py.
The header comment section inside that script contains
detailed steps explaining how to do so. In brief, it is as simple as
copying a block of about nine lines and modifying three things:

```
(A) edit the testcase comments
(B) edit the line which specifies the command and arguments to execute
(C) edit the line that specifies the test result to be matched
```

2. Adding a new test with a new tool involves a few more steps:

```
(A) Create and merge a new tool such as .../fabric/test/tools/NewTool/newTool.sh
(B) create a new file .../fabric/test/regression/daily/test_newTool.py
    and define a python wrapper to invoke the new tool.
    Model it after others like test_example.py; the file should
    contain a testcase that looks something like this:

    def test_TLS(self):
        '''
        FAB-2032,FAB-3593
        Network: 2 Ord, 5 KB, 3 ZK, 2 Org, 4 Peers, 10 Chan, 10 CC
        Launch network, use NewTool to wreak havoc on the network by
        doing something crazy, and ensure the network handles it gracefully.
        Then remove network and cleanup.
        '''
        result = subprocess.check_output("../../tools/NewTool/newTool.sh arg1 arg2", shell=True)
        self.assertIn("A STRING from stdout of NewTool that indicates PASS", result)

(C) add lines at the bottom of runDailyTestSuite.sh to
    invoke the new testcase(s) using the new tool:

    py.test -v --junitxml results.xml ./test_example.py
```

### Test Output: formatting requirements
The Jenkins automation tool that runs the test suite expects
to receive xml output to display. For this reason, we execute
tests in one of the following ways:

Option 1. (Useful for any test language including bash, tool binaries, etc.):
Invoke the test from within a python wrapper script, which allows
searching the stdout for a user-defined test result string.
Using the python wrapper makes it easy to provide the desired
junitxml output format. For example:

```
py.test -v --junitxml results.xml ./test_example.py
```

Option 2. (Useful for GO tests):
Execute "go" tests, and pipe the output through a tool such as
github.com/jstemmer/go-junit-report to convert it to xml, e.g.:

```
cd ../../tools/OTE
go get github.com/jstemmer/go-junit-report
go test -run ORD7 -v | go-junit-report >> results.xml
```

## Test Descriptions

[Test Descriptions](README_testdescriptions.rst)

[ChainCode Tests descriptions and how-to](chaincodeTests/README.rst)
test/regression/daily/SampleScriptFailTest.sh

@@ -0,0 +1,4 @@
#!/bin/bash

echo "TEST $0 RESULT=FAIL"
test/regression/daily/SampleScriptPassTest.sh

@@ -0,0 +1,4 @@
#!/bin/bash

echo "TEST $0 RESULT=PASS"
test/regression/daily/TestPlaceholder.sh

+4

@@ -0,0 +1,4 @@
#!/bin/bash

echo "FUNCTION CALL TO $0 NEEDS REPLACING WITH ACTUAL TEST"
test/regression/daily/runDailyTestSuite.sh

@@ -0,0 +1,8 @@
#!/bin/bash

echo "========== Example tests and PTE system tests..."
py.test -v --junitxml results.xml test_example.py test_pte.py

echo "========== Chaincode tests..."
chaincodeTests/runChaincodes.sh

test/regression/daily/test_example.py

+50
@@ -0,0 +1,50 @@
# To run this:
# Install: sudo apt-get install python python-pytest
# Install: sudo pip install xmlrunner
# At command line: py.test -v --junitxml results.xml ./test_example.py

import unittest
import xmlrunner
import subprocess

# The string that the sample test scripts print to stdout to indicate success
TEST_PASS_STRING="RESULT=PASS"

class SampleTest(unittest.TestCase):
    @unittest.skip("skipping")
    def test_skipped(self):
        '''
        This test will be skipped.
        '''
        self.fail("I should not see this")

    def test_SampleAdditionTestWillPass(self):
        '''
        This test will pass.
        '''
        result = subprocess.check_output("echo '7+3' | bc", shell=True)
        self.assertEqual(int(result.strip()), 10)

    def test_SampleStringTestWillPass(self):
        '''
        This test will pass.
        '''
        result = subprocess.check_output("echo '7+3'", shell=True)
        self.assertEqual(result.strip(), "7+3")

    def test_SampleScriptPassTest(self):
        '''
        This test will pass because the executed script prints the RESULT=PASS string to stdout.
        '''
        result = subprocess.check_output("./SampleScriptPassTest.sh", shell=True)
        self.assertIn(TEST_PASS_STRING, result)

    def test_SampleScriptFailTest(self):
        '''
        This test will pass because the executed script does NOT print the RESULT=PASS string to stdout.
        '''
        result = subprocess.check_output("./SampleScriptFailTest.sh", shell=True)
        self.assertNotIn(TEST_PASS_STRING, result)

if __name__ == '__main__':
    # When invoked directly (python test_example.py), write xml reports to ./runner-results/
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='runner-results'))
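
A handy way to run just one of these testcases during development is py.test's standard `-k` keyword filter (a convenience only, not part of the CI flow):

```
cd /path/to/fabric/test/regression/daily
py.test -v --junitxml results.xml ./test_example.py -k SampleAddition
```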
