Commit a4dbc17 (parent 0509bed)

FAB-3932 standardize test names

- Renamed some files and test names to meet the standardized naming conventions, for a better user experience when viewing test results on the CI reports page for all the tests in the daily and weekly test suites.
- Added some scripts to facilitate running the weekly test suite in groups, in parallel.
- Use the README doc syntax strategy: italics font format for filenames and paths, and bold font to highlight other key phrases.

Change-Id: Icdecd4c5db74e10103daad2f2e4424cd5028a26a
Signed-off-by: Scott Zwierzynski <[email protected]>

9 files changed: +89 -35 lines
test/regression/daily/test_example.py → test/regression/daily/Example.py (+2 -2)

@@ -2,7 +2,7 @@
 # To run this:
 # Install: sudo apt-get install python python-pytest
 # Install: sudo pip install xmlrunner
-# At command line: py.test -v --junitxml results.xml ./test_example.py
+# At command line: py.test -v --junitxml results_sample.xml Example.py
 
 import unittest
 import xmlrunner
@@ -21,7 +21,7 @@ def test_skipped(self):
 # This runs on ubuntu x86 laptop, but it fails when run by CI, because
 # "bc" is not installed on the servers used for CI jobs.
 @unittest.skip("skipping")
-def test_SampleAdditionTestWillPass(self):
+def test_SampleAdditionTestSkippedButWillPassIfInstallBC(self):
 '''
 This test will pass.
 '''
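The renamed driver above follows the suite's plain pytest-compatible unittest pattern. As a minimal sketch (the file, class, and test names here are illustrative stand-ins, not the repo's actual code), an Example.py-style driver looks like this:

```python
# Minimal sketch of an Example.py-style driver (illustrative names only).
# Run with: py.test -v --junitxml results_sample.xml sample_driver.py
import subprocess
import sys
import unittest

class SampleDriver(unittest.TestCase):
    def test_SampleToolInvocation(self):
        '''Drive an external command and assert on its stdout, the way
        the real drivers wrap their tools.'''
        # Stand-in command; a real driver would invoke its tool binary here.
        out = subprocess.check_output(
            [sys.executable, "-c", "print('RESULT: PASS')"]).decode()
        self.assertIn("RESULT: PASS", out)
```

Running such a file through py.test with --junitxml produces the xml output that the CI reports page consumes.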

test/regression/daily/README.md (+24 -24)

@@ -1,61 +1,61 @@
 # Daily Test Suite
 
-This readme explains everything there is to know about our daily regression test suite. *Note 1*: This applies similarly for both the **test/regression/daily/** and **test/regression/weekly/** test suites. *Note 2*: The Release Criteria (**test/regression/release/**) test suite is a subset of all the Daily and Weekly tests.
+This readme explains everything there is to know about our daily regression test suite. **Note 1**: This applies similarly for both the *test/regression/daily/* and *test/regression/weekly/* test suites. **Note 2**: The Release Criteria (*test/regression/release/*) test suite is a subset of all the Daily and Weekly tests.
 
 - How to Run the Tests
 - Where to View the Results produced by the daily automation tests
 - Where to Find Existing Tests
 - How to Add New Tests to the Automated Test Suite
-* Why Test Output Format Must Be *xml* and How to Make It So
+* Why Test Output Format Must Be **xml** and How to Make It So
 * Alternative 1: Add a test using an existing tool and test driver script
 * Alternative 2: Add a new test with a new tool and new test driver script
 * How to Add a New Chaincode Test
 
 ## How to Run the Tests, and Where to View the Results
 
-Everything starts with [runDailyTestSuite.sh](./runDailyTestSuite.sh), which invokes all test driver scripts, such as **test_pte.py** and **test_chaincodes.py**. Together, these driver scripts initiate all tests in the daily test suite. You can manually execute **runDailyTestSuite.sh** in its entirety, or run any one of the test driver scripts on the command line. Or, you may simply view the results generated daily by an automated Continuous Integration (CI) tool which executes **runDailyTestSuite.sh**. Reports are displayed on the [Daily Test Suite Results Page](https://jenkins.hyperledger.org/view/Daily/job/fabric-daily-chaincode-tests-x86_64/test_results_analyzer). When you look at the reports, click the buttons in the **'See children'** column to see the results breakdown by component and by individual tests.
+Everything starts with [runDailyTestSuite.sh](./runDailyTestSuite.sh), which invokes all test driver scripts, such as *systest_pte.py* and *chaincodes.py*. Together, these driver scripts initiate all tests in the daily test suite. You can manually execute *runDailyTestSuite.sh* in its entirety, or run any one of the test driver scripts on the command line. Or, you may simply view the results generated daily by an automated Continuous Integration (CI) tool which executes *runDailyTestSuite.sh*. Reports are displayed on the [Daily Test Suite Results Page](https://jenkins.hyperledger.org/view/Daily/job/fabric-daily-chaincode-tests-x86_64/test_results_analyzer). When you look at the reports, click the buttons in the **'See children'** column to see the results breakdown by component and by individual tests.
 
 #### Where to Find Existing Tests
 
-Examine the driver scripts to find the individual tests, which are actually stored in several locations under **/path/to/fabric/test/**. Some tests are located in test suite subdirectories such as
+Examine the driver scripts to find the individual tests, which are actually stored in several locations under */path/to/fabric/test/*. Some tests are located in test suite subdirectories such as
 
-- **test/regression/daily/chaincodeTests/**
+- *test/regression/daily/chaincodeTests/*
 
 whereas other tests are located in the tools directories themselves, such as
 
-- **test/feature/ft/** - User-friendly *Behave* functional tests feature files
-- **test/tools/PTE/** - Performance Traffic Engine *(PTE)* tool and tests
-- **test/tools/OTE/** - Orderer Traffic Engine *(OTE)* tool and tests
+- *test/feature/ft/* - User-friendly **Behave** functional tests feature files
+- *test/tools/PTE/* - Performance Traffic Engine **(PTE)** tool and tests
+- *test/tools/OTE/* - Orderer Traffic Engine **(OTE)** tool and tests
 
 Each testcase title should provide the test objective and a Jira FAB issue which can be referenced for more information. Test steps and specific details can be found in the summary comments of the test scripts themselves. Additional information can be found in the README files associated with the various test directories.
 
 ## How to Add New Tests to the Automated Test Suite
 
-We love contributors! Anyone may add a new test to an existing test driver script, or even create a new tool and new test driver script. The steps for both scenarios are provided further below as *Alternative 1* and *Alternative 2*. First, a few things to note:
+We love contributors! Anyone may add a new test to an existing test driver script, or even create a new tool and new test driver script. The steps for both scenarios are provided further below as **Alternative 1** and **Alternative 2**. First, a few things to note:
 
 - Before linking a test case into the CI automation tests, please merge your (tool and) testcase into gerrit, and create a Jira task, as follows:
 
-1. First merge your tool and tests to gerrit in appropriate folders under **/path/to/fabric/test/**.
+1. First merge your tool and tests to gerrit in appropriate folders under */path/to/fabric/test/*.
 1. Of course, all tests must pass before being submitted. We do not want to see any false positives for test case failures.
 1. To integrate your new tests into the CI automation test suite, create a new Jira task FAB-nnnn for each testcase, and use 'relates-to' to link it to epic FAB-3770.
-1. You will use this new Jira task to submit a changeset to gerrit, to invoke your testcase from a driver script similar to **/path/to/fabric/test/regression/daily/test_example.py**. In the comments of the gerrit merge request submission, include the
+1. You will use this new Jira task to submit a changeset to gerrit, to invoke your testcase from a driver script similar to */path/to/fabric/test/regression/daily/Example.py*. In the comments of the gerrit merge request submission, include the
 - Jira task FAB-nnnn
 - the testcase title and objective
 - copy and fill in the template from Jira epic FAB-3770
-1. Follow all the steps below in either *Alternative*, and then the test will be executed automatically as part of the next running of the CI daily test suite. The results will show up on the daily test suite display board, which can be viewed by following the link at the top of this page.
+1. Follow all the steps below in either **Alternative**, and then the test will be executed automatically as part of the next running of the CI daily test suite. The results will show up on the daily test suite display board, which can be viewed by following the link at the top of this page.
 
-#### Why Test Output Format Must Be *xml* and How to Make It So
+#### Why Test Output Format Must Be **xml** and How to Make It So
 
-The Continuous Integration (CI) team utilizes a Jenkins job to execute the full test suite, **runDailyTestSuite.sh**. The CI job consumes xml output files, creates reports, and displays them. *Note: When adding new scripts that generate new xml files, if you do not see the results displayed correctly, please contact us on [Rocket.Chat channel #fabric-ci](https://chat.hyperledger.org).* For this reason, we execute tests in one of the following ways:
+The Continuous Integration (CI) team utilizes a Jenkins job to execute the full test suite, *runDailyTestSuite.sh*. The CI job consumes xml output files, creates reports, and displays them. **Note:** When adding new scripts that generate new xml files, if you do not see the results displayed correctly, please contact us on [Rocket.Chat channel #fabric-ci](https://chat.hyperledger.org). For this reason, we execute tests in one of the following ways:
 
-1. Invoke the individual testcase from within a test driver script in **regression/daily/**. There are many examples here, such as **test_example.py** and **test_pte.py**. These test driver scripts are basically wrappers written in python, which makes it easy to produce the desired junitxml output format required for displaying reports. This method is useful for almost any test language, including bash, tool binaries, and more. More details are provided below explaining how to call testcases from within a test driver script. Here we show how simple it is to execute the test driver and all the testcases within it. *Note: File 'example_results.xml' will be created, containing the test output.*
+1. Invoke the individual testcase from within a test driver script in *regression/daily/*. There are many examples here, such as *Example.py* and *systest_pte.py*. These test driver scripts are basically wrappers written in python, which makes it easy to produce the desired junitxml output format required for displaying reports. This method is useful for almost any test language, including bash, tool binaries, and more. More details are provided below explaining how to call testcases from within a test driver script. Here we show how simple it is to execute the test driver and all the testcases within it. **Note:** File *results_sample.xml* will be created, containing the sample testcases output.
 
 ```
 cd /path/to/fabric/test/regression/daily
-py.test -v --junitxml example_results.xml ./test_example.py
+py.test -v --junitxml example_results.xml ./Example.py
 ```
 
-1. Execute 'go test', and pipe the output through tool github.com/jstemmer/go-junit-report to convert to xml. *Note: In the example shown, file 'results.xml' will be created with the test output.*
+1. Execute 'go test', and pipe the output through tool github.com/jstemmer/go-junit-report to convert to xml. **Note:** In the example shown, file 'results.xml' will be created with the test output.
 
 ```
 cd /path/to/fabric/test/tools/OTE
@@ -67,14 +67,14 @@ The Continuous Integration (CI) team utilizes a Jenkins job to execute the full
 
 ### Alternative 1: Add a test using an existing tool and test driver script
 
-To add another test using an existing tool (such as **PTE**), simply add a test inside the existing test driver (such as **test_pte.py**). It is as simple as copying a block of ten lines and modifying these things:
+To add another test using an existing tool (such as **PTE**), simply add a test inside the existing test driver (such as *systest_pte.py*). It is as simple as copying a block of ten lines and modifying these things:
 
 1. Insert the testcase in the correct test component class and edit the test name
 1. Edit the testcase description
 1. Edit the specified command and arguments to be executed
 1. Edit the asserted test result to be matched
 
-Refer to **test_example.py** for a model to clone and get started quickly. The testcases should use the format shown in this example:
+Refer to *Example.py* for a model to clone and get started quickly. The testcases should use the format shown in this example:
 
 ```
 def test_FAB9876_1K_Payload(self):
@@ -94,9 +94,9 @@ Refer to **test_example.py** for a model to clone and get started quickly. The t
 
 Adding a new test with a new tool involves a few more steps.
 
-1. Create and merge a new tool, for example, **/path/to/fabric/test/tools/NewTool/newTool.sh**
-1. Create a new test driver script such as **/path/to/fabric/test/regression/daily/test_newTool.py**. Model it after others like **test_example.py**, found under driver scripts under **/path/to/test/regression/daily/** and **test/regression/weekly/**. Note: the filename must start with 'test_'.
-1. Add your new testcases to **test_newTool.py**. The testcases should use the following format. Refer also to the steps described in Alternative 1, above.
+1. Create and merge a new tool, for example, */path/to/fabric/test/tools/NewTool/newTool.sh*
+1. Create a new test driver script such as */path/to/fabric/test/regression/daily/newTool.py*. Model it after others like *Example.py*, found under driver scripts under */path/to/test/regression/daily/* and *test/regression/weekly/*.
+1. Add your new testcases to *newTool.py*. The testcases should use the following format. Refer also to the steps described in Alternative 1, above.
 
 ```
 class <component_feature>(unittest.TestCase):
@@ -109,10 +109,10 @@ Adding a new test with a new tool involves a few more steps.
 self.assertIn("<string from stdout of newTool that indicates PASS>", result)
 ```
 
-1. Edit **/path/to/test/regression/daily/runDailyTestSuite.sh** to run the new testcases. Add a new line, or append your new test driver scriptname **test_newTool.py** to an existing line:
+1. Edit */path/to/test/regression/daily/runDailyTestSuite.sh* to run the new testcases. Add a new line, or append your new test driver scriptname *newTool.py* to an existing line:
 
 ```
-py.test -v --junitxml results.xml test_example.py test_newTool.py
+py.test -v --junitxml results_newTool.xml newTool.py
 ```
 
 ### How to Add a New Chaincode Test
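The README's Alternative 2 template (`class <component_feature>(unittest.TestCase)` with a subprocess call and an `assertIn` on a PASS marker) can be filled in as a runnable sketch. Everything here is hypothetical: "newTool" and the FAB number are placeholders, and a python one-liner stands in for the tool so the sketch runs anywhere.

```python
# Hypothetical filled-in version of the <component_feature> driver template.
# "newTool" is not a real repo artifact; a one-liner simulates its stdout.
import subprocess
import sys
import unittest

class NewTool_Smoke(unittest.TestCase):
    def test_FAB0000_NewTool_Basic(self):
        '''
        Objective: run the tool, capture stdout, assert on the PASS marker,
        exactly the shape the README prescribes for new drivers.
        '''
        # A real driver would run something like:
        #   subprocess.check_output("./newTool.sh <args>", shell=True)
        result = subprocess.check_output(
            [sys.executable, "-c", "print('newTool result: PASS')"]).decode()
        self.assertIn("PASS", result)
```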
test/regression/daily/runDailyTestSuite.sh (+20 -4)

@@ -1,8 +1,24 @@
 #!/bin/bash
 
-echo "========== Example tests and PTE system tests..."
-py.test -v --junitxml results.xml test_example.py test_pte.py
+DAILYDIR="$GOPATH/src/github.com/hyperledger/fabric/test/regression/daily"
 
-echo "========== Chaincode tests..."
-chaincodeTests/runChaincodes.sh
+#echo "========== Sample Tests..."
+#py.test -v --junitxml results_sample.xml Example.py
 
+echo "========== System Test Performance Stress tests driven by PTE tool..."
+py.test -v --junitxml results_systest_pte.xml systest_pte.py
+
+echo "========== Test Your Chaincode ..."
+# TBD - after changeset https://gerrit.hyperledger.org/r/#/c/9163/ is merged,
+# replace the previous 2 lines with this new syntax to run all the chaincode tests;
+# and when making this change we should also remove file chaincodeTests/runChaincodes.sh)
+#
+#cd $DAILYDIR/chaincodeTests/envsetup
+#py.test -v --junitxml ../../results_testYourChaincode.xml testYourChaincode.py
+
+# TBD - after changeset https://gerrit.hyperledger.org/r/#/c/9251/ is merged,
+# and integrated with this, lines like these should be executed too:
+#echo "========== Ledger component performance tests..."
+#cd $DAILYDIR/ledgerperftests
+#py.test -v --junitxml results_perf_goleveldb.xml test_perf_goleveldb.py
+#py.test -v --junitxml results_perf_couchdb.xml test_perf_couchdb.py

test/regression/daily/test_pte.py → test/regression/daily/systest_pte.py (+3 -3)

@@ -2,7 +2,7 @@
 ######################################################################
 # To execute:
 # Install: sudo apt-get install python python-pytest
-# Run on command line: py.test -v --junitxml results.xml ./test_pte.py
+# Run on command line: py.test -v --junitxml results.xml ./systest_pte.py
 
 import unittest
 import subprocess
@@ -14,7 +14,7 @@
 ### LEVELDB
 ######################################################################
 
-class LevelDB_Perf_Stress(unittest.TestCase):
+class Perf_Stress_LevelDB(unittest.TestCase):
 @unittest.skip("skipping")
 def test_FAB3808_TPS_Queries_1_Thread_TinyNtwk(self):
 '''
@@ -70,7 +70,7 @@ def test_FAB3835_TPS_Invokes_8_Thread_TinyNtwk(self):
 ### COUCHDB
 ######################################################################
 
-class CouchDB_Perf_Stress(unittest.TestCase):
+class Perf_Stress_CouchDB(unittest.TestCase):
 @unittest.skip("skipping")
 def test_FAB3807_TPS_Queries_1_Thread_TinyNtwk(self):
 '''
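The rename above moves the component name to the end (`Perf_Stress_LevelDB` rather than `LevelDB_Perf_Stress`) so related classes sort together on the CI reports page. A runnable sketch of that class shape (FAB numbers and bodies are placeholders, not the repo's tests) shows how network-dependent cases are parked with a skip marker while the class still emits results:

```python
# Sketch of the Perf_Stress_<DB> class convention used in systest_pte.py.
# The skip marker mirrors how cases needing a live Fabric network are parked.
import unittest

class Perf_Stress_LevelDB(unittest.TestCase):
    @unittest.skip("skipping: needs a running Fabric network and PTE")
    def test_FAB0000_TPS_Queries_1_Thread_TinyNtwk(self):
        '''Placeholder for a PTE-driven throughput measurement.'''

    def test_FAB0001_Driver_Sanity(self):
        '''Trivial always-pass case so the class emits a result.'''
        self.assertTrue(True)
```

Skipped cases still appear (as skips) in the junitxml, so the reports page records that the testcase exists even when it cannot run.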

test/regression/weekly/README.md (+14 -0)

@@ -3,3 +3,17 @@
 
 ## Test Details
 Refer to [../daily/README](../daily/README.md). Everything there is relevant.
+
+## Execute all tests in this directory
+The tests are split into groups; each could be executed by different CI jobs in parallel.
+
+```
+cd /path/to/fabric/test/regression/weekly
+```
+
+```
+./runGroup1.sh
+./runGroup2.sh
+./runGroup3.sh
+```
+
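Because each group writes its own results_*.xml file, the groups can also be launched concurrently from a single shell rather than by separate CI jobs. This is a hypothetical sketch, not part of the commit; stub functions stand in for runGroup1..3.sh so it runs anywhere:

```shell
#!/bin/bash
# Hypothetical parallel launcher; in the real tree you would background
# ./runGroup1.sh, ./runGroup2.sh, ./runGroup3.sh the same way.
runGroup() {
    echo "group $1 running"
    sleep 0.1          # stands in for the real test work
    echo "group $1 done"
}

runGroup 1 & runGroup 2 & runGroup 3 &
wait                   # block until all backgrounded groups finish
echo "all weekly groups done"
```

The `wait` is what lets a wrapping CI job treat the three groups as one step: it returns only after every backgrounded group has exited.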

test/regression/weekly/runGroup1.sh (+8 -0)

@@ -0,0 +1,8 @@
+#!/bin/bash
+
+######################################################################
+### Run one group of the tests in weekly test suite.
+
+echo "========== Performance Stress PTE Scaleup tests"
+py.test -v --junitxml results_systest_pte_Scaleup.xml systest_pte.py -k Scaleup
+

test/regression/weekly/runGroup2.sh (+8 -0)

@@ -0,0 +1,8 @@
+#!/bin/bash
+
+######################################################################
+### Run one group of the tests in weekly test suite.
+
+echo "========== Performance Stress PTE 12Hr test"
+py.test -v --junitxml results_systest_pte_12Hr.xml systest_pte.py -k TimedRun_12Hr
+

test/regression/weekly/runGroup3.sh (+8 -0)

@@ -0,0 +1,8 @@
+#!/bin/bash
+
+######################################################################
+### Run one group of the tests in weekly test suite.
+
+echo "========== Performance Stress PTE 72Hr test"
+py.test -v --junitxml results_systest_pte_72Hr.xml systest_pte.py -k TimedRun_72Hr
+

test/regression/weekly/test_pte.py → test/regression/weekly/systest_pte.py (+2 -2)

@@ -2,7 +2,7 @@
 ######################################################################
 # To execute:
 # Install: sudo apt-get install python python-pytest
-# Run on command line: py.test -v --junitxml results.xml ./test_pte.py
+# Run on command line: py.test -v --junitxml results.xml ./systest_pte.py
 
 import unittest
 import subprocess
@@ -14,7 +14,7 @@
 ### COUCHDB
 ######################################################################
 
-class CouchDB_Perf_Stress(unittest.TestCase):
+class Perf_Stress_CouchDB(unittest.TestCase):
 
 @unittest.skip("skipping")
 def test_FAB3820_TimedRun_12Hr(self):
