sznqalibs
=========

Collection of Python libs developed for testing purposes.


hoover
------

hoover is a testing framework built with the following principles
in mind:

 * data-driven testing,
 * easy test data definition (even with huge sets),
 * helpful reporting.

The typical use case is that you have a system under test, another
"reference" system, and knowledge about the testing input. You then
create drivers for both systems that parse and prepare their output
in a way that can be compared to each other.


### Examples ###

An example is worth 1000 words:

    import operator
    import subprocess
    import unittest

    from sznqalibs import hoover


    class BaloonDriver(hoover.TestDriver):
        """
        Object enclosing SUT or one of its typical use patterns
        """

        def _get_data(self):
            # now do something to obtain results from the SUT
            # using self._argset dictionary
            self.data['sentence'] = subprocess.check_output(
                ['sut', str(self.args['count']), self.args['color']]
            )

    class OracleDriver(hoover.TestDriver):
        """
        Object providing Oracle (expected output) for test arguments
        """

        def _get_data(self):
            # obtain expected results, for example by asking a
            # reference implementation (or by re-implementing a
            # fraction of the SUT, e.g. only for the expected
            # data)
            self.data['sentence'] = ("%(count)s %(color)s baloons"
                                     % self._args)

    class MyTest(unittest.TestCase):

        def test_valid(self):
            # as an alternative to defining each _args separately,
            # Cartman lets you define just the ranges
            argsrc = hoover.Cartman({
                # for each parameter define an iterator with the
                # values you want to combine in this test
                'count': xrange(100),
                'color': ['red', 'blue']
            })
            # regression_test will call both drivers once with
            # each argument set, compare the results and store them
            # along with some statistics
            tracker = hoover.regression_test(
                argsrc=argsrc,
                tests=[(operator.eq, OracleDriver, BaloonDriver)]
            )
            if tracker.errors_found():
                print tracker.format_report()

But that's just to get the idea. For a (hopefully) working
example, look at the doc/examples subfolder; there's a "calculator"
implemented in Bash and Perl/CGI, and a *hoover* test that
compares these two implementations to a Python implementation
defined inside the test.


### pFAQ (Potentially FAQ) ###

The truth is that nobody has asked any questions so far, so I can't
honestly write an FAQ (or even an AQ, for that matter) ;). So at
least I'll try to answer what I feel people would ask:

 * **Q:** What do you mean by implementing a "reference", or
   "oracle", driver? Am I supposed to re-implement the system?
   Are you serious?

   **A:** Yes, I am serious. But consider this:

   First, not all systems are necessarily complicated. Take
   GNU *cat*. All it does is print data. Open, write, close.
   The added value is that it's insanely good at it. However,
   your oracle driver does not need to be *that* good. Even if
   it were only able to check the length or MD5 of the data, it
   would be better than nothing (a sketch of such a "weak"
   oracle is at the end of this README).

   Also, if you are creative enough, you can select the data in
   a clever way and develop tricks that help your driver along
   the way. For example, you could "inject" the data with hints
   about what the result should look like.

   Next, you don't actually need to re-implement anything in
   the driver. As the most "brute" strategy, instead of using
   hoover to generate the data, you might want to just generate
   the data manually somehow (as you might have done so far),
   verify it and feed it to your drivers, including the expected
   results. This might not be the most viable option for a huge
   set, but at least what *hoover* will give you is the running
   and reporting engine.

   Then there might be cases when the system actually *is*
   trivial and you *can* re-implement it, but for some reason
   you don't have a testing framework on the native platform:
   for example, an embedded system, or a library that needs to
   be in a specific language like Bash. If it has trivial
   parts, you can test them in *hoover* and save yourself some
   maintenance headache.

   Last but not least (this was actually the story behind
   *hoover* being born), there are cases when you already
   *have* a reference implementation and a new implementation,
   and you just need to verify that the behavior is the same.
   So you just wrap both systems in drivers, tweak them so that
   they return the same data (if they don't already), or at
   least data you can write a comparison function for, squeeze
   them all into `hoover.regression_test` and hit the big
   button. Note that you can even have 1 reference driver and
   N SUT drivers (sketched at the end of this README), which
   can save you kajillions of machine seconds if your old
   library is slow or resource-hungry but you have more ports
   of the new one.

   As a bonus, note that *hoover* can also provide you with
   some performance stats. There's absolutely no intent to
   claim that this is a proper performance measurement tool
   (it was actually designed to assess the performance of the
   drivers), but on the other hand, it comes with the package,
   so it might be useful for you.

 * **Q:** Is it mature?

   **A:** No and a tiny yes.

   Yes, because it has already been used in a real environment
   and it succeeded. But then again, it was deployed by the
   author, and he has no idea whether that's actually doable
   for any sane person. You are more than welcome to try it and
   provide me with feedback, but I can't provide any kind of
   guarantees whatsoever.

   No, because there are parts that are still far from being
   polished, easy to use or even possible to understand.
   (Heck, at this moment even I don't understand what `RuleOp`
   is, or was, for :D.) And there are probably limitations that
   could be removed.

   That said, the code is not a complete and utter mess.

   But the API **will** change. Things will be re-designed,
   and some will even be removed or split into other modules.

   My current "strategy", however, is to do this as I go,
   based on real experience from trying to use it in real
   testing scenarios.
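

### Sketches referenced in the pFAQ ###

The first answer above claims that even a "weak" oracle, one that can
only predict the length or MD5 of the output, is better than nothing.
Here is a minimal sketch of that idea for GNU *cat*, modeled on the
example further up. It assumes the same driver interface as shown
there (`hoover.TestDriver` subclasses filling `self.data`, with
arguments available as `self.args`); the class names and the `path`
argument are made up for illustration, so check them against the
actual library before copying anything.

    import hashlib
    import subprocess

    from sznqalibs import hoover


    class CatDriver(hoover.TestDriver):
        """
        SUT driver around GNU cat: stores only a digest of what cat
        actually printed, not the full output.
        """

        def _get_data(self):
            out = subprocess.check_output(['cat', self.args['path']])
            self.data['length'] = len(out)
            self.data['md5'] = hashlib.md5(out).hexdigest()


    class WeakOracleDriver(hoover.TestDriver):
        """
        "Weak" oracle: it does not re-implement cat, but it can still
        predict the digest, because cat must echo its input verbatim.
        """

        def _get_data(self):
            with open(self.args['path'], 'rb') as f:
                expected = f.read()
            self.data['length'] = len(expected)
            self.data['md5'] = hashlib.md5(expected).hexdigest()

Since both drivers store dictionaries with the same keys, plain
`operator.eq` can compare them in `hoover.regression_test`, exactly
as in the example above.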
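The same answer also mentions running 1 reference driver against N
SUT drivers. The sketch below reuses `argsrc` and `OracleDriver` from
the example above; `BashPortDriver` and `PerlPortDriver` are
hypothetical drivers standing in for two ports of the same system,
and the assumption that the `tests` argument simply takes one
`(comparison, oracle, SUT)` tuple per pairing is mine, so verify it
against the library before relying on it.

    import operator

    # one oracle, N ports of the system under test; each tuple is
    # assumed to describe one oracle-vs-SUT comparison
    tracker = hoover.regression_test(
        argsrc=argsrc,
        tests=[
            (operator.eq, OracleDriver, BashPortDriver),
            (operator.eq, OracleDriver, PerlPortDriver),
        ]
    )
    if tracker.errors_found():
        print tracker.format_report()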