# mu-perf-benchmarks issues
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues

# Issue #22: Size of quicksort micro benchmark (Zixian Cai, 2017-11-21)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/22

Currently, we read the numbers off the bundled file and then put them in global cells/statics/whatever. It would be ideal to be able to change the size of the quicksort test.

# Issue #21: Use temporary files instead of IPC (Zixian Cai, 2017-11-21)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/21

Currently, we are piping stdin/stdout to read the output from callbacks, but we may run into scheduling issues, which would introduce noise into the measurement.

# Issue #20: Sanity check for mubench (Zixian Cai, 2017-11-21)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/20

We'd like to have a sanity check inside the script. For example, we want to check the CPU governor setting, i.e. whether CPU frequency scaling is turned off. We may also want to add a check for hyper-threading.
The bottom line is that before running a script, we want to make sure our assumptions are not violated.

# Issue #19: Add extra field to taskset for web interface (Zixian Cai, 2017-08-25)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/19

* [ ] Specify the implementation we care about

# Issue #18: WASM testing doesn't work on doge (Isaac Gariano <isaac@ecs.vuw.ac.nz>, 2017-07-18)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/18

Trying to mubench fib (using the `example/test_rpy_fib.yml` file in master) doesn't work on `doge`:
I get the following error:
```
[INFO] 2017-07-17 20:37:17,164 taskset [c_wasm_O3] compiling...
[INFO] 2017-07-17 20:37:17,165 wasm emcc -s WASM=1 -O3 -I/home/isaacg/mu-perf-benchmarks/mubench/suite/callbacks/wasm -o example/fib-c_wasm_O3.js /home/isaacg/mu-perf-benchmarks/mubench/suite/micro/fib/fib.c /home/isaacg/mu-perf-benchmarks/mubench/suite/callbacks/wasm/cb_clock.c
[INFO] 2017-07-17 20:37:20,764 taskset [c_wasm_O3] FAILED
[CRITICAL] 2017-07-17 20:37:20,764 taskset Executing 'emcc -s WASM=1 -O3 -I/home/isaacg/mu-perf-benchmarks/mubench/suite/callbacks/wasm -o example/fib-c_wasm_O3.js /home/isaacg/mu-perf-benchmarks/mubench/suite/micro/fib/fib.c /home/isaacg/mu-perf-benchmarks/mubench/suite/callbacks/wasm/cb_clock.c' failed.
[INFO] 2017-07-17 20:37:20,764 taskset [c_wasm_O3] error output written to example/c_wasm_O3.log
```
And example/c_wasm_O3.log contains:
```
WARNING root: LLVM version appears incorrect (seeing "4.0.0", expected "3.2")
INFO root: (Emscripten: Running sanity checks)
WARNING root: -I or -L of an absolute path "-I/home/isaacg/mu-perf-benchmarks/mubench/suite/callbacks/wasm" encountered. If this is to a local system header/library, it may cause problems (local system files make sense for compiling natively on your system, but not necessarily to JavaScript). Pass '-Wno-warn-absolute-paths' to emcc to hide this warning.
INFO root: =======================================
INFO root: bootstrapping relooper...
INFO root: bootstrap phase 1
/home/isaacg/clang/bin/lli: error creating EE: No available targets are compatible with this triple.
FAIL: Running the generated program failed!
Traceback (most recent call last):
File "/usr/share/emscripten/emcc", line 1864, in <module>
final = shared.Building.emscripten(final, append_ext=False, extra_args=extra_args)
File "/usr/share/emscripten/tools/shared.py", line 1276, in emscripten
assert os.path.exists(filename + '.o.js') and len(open(filename + '.o.js', 'r').read()) > 0, 'Emscripten failed to generate .js: ' + str(compiler_output)
AssertionError: Emscripten failed to generate .js:
ERROR root: bootstrapping relooper failed. You may need to manually create relooper.js by compiling it, see src/relooper/emscripten
Traceback (most recent call last):
File "/usr/share/emscripten/emscripten.py", line 1352, in <module>
_main(environ=os.environ)
File "/usr/share/emscripten/emscripten.py", line 1340, in _main
temp_files.run_and_clean(lambda: main(
File "/usr/share/emscripten/tools/tempfiles.py", line 39, in run_and_clean
return func()
File "/usr/share/emscripten/emscripten.py", line 1348, in <lambda>
DEBUG_CACHE=DEBUG_CACHE,
File "/usr/share/emscripten/emscripten.py", line 1226, in main
shared.Building.ensure_relooper(relooper)
File "/usr/share/emscripten/tools/shared.py", line 1521, in ensure_relooper
1/0
ZeroDivisionError: integer division or modulo by zero
Traceback (most recent call last):
File "/usr/bin/emcc", line 1864, in <module>
final = shared.Building.emscripten(final, append_ext=False, extra_args=extra_args)
File "/usr/share/emscripten/tools/shared.py", line 1276, in emscripten
assert os.path.exists(filename + '.o.js') and len(open(filename + '.o.js', 'r').read()) > 0, 'Emscripten failed to generate .js: ' + str(compiler_output)
AssertionError: Emscripten failed to generate .js:
```

# Issue #17: Implement performance counter callback (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/17
## Problem Description
The state-of-the-art performance measurement framework uses [perfmon2](http://perfmon2.sourceforge.net) to get the performance counter readings. This is used in the [probe framework](http://squirrel.anu.edu.au/hg/all/shared/probes/file/74661c04458c/native/perf_event/perf_event_agent.c) used by this lab in the past.
It is thus desirable to implement it in this framework.
Note: this task should come after refactoring all languages to use callbacks written in C via FFI.

# Issue #16: Correct timing API calls (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/16
## Problem Description
`clock()` gives `1µs` accuracy on macOS, and `10µs` on Linux. This is not a good function to use.
On macOS, `clock_gettime()` doesn't seem to yield `1ns` accuracy as on Linux, but rather `1µs`. It is thus also undesirable.
It seems on macOS it's better to use the function `mach_absolute_time()` as suggested [here](https://stackoverflow.com/questions/5167269/clock-gettime-alternative-in-mac-os-x). On Linux `clock_gettime()` is still good.
## Tasks
Use `clock_gettime()` on Linux and `mach_absolute_time()` on macOS.
This can be achieved via `#ifdef` preprocessor statements.
However, this could be difficult to implement in the Mu callback. Thus it may be desirable to refactor the Mu callback to `CCALL` the callback implementations in C. In fact, it may be desirable to refactor the whole framework so that the callbacks are written in C, and all other languages call the callback functions via their own FFI.

# Issue #15: Clock callback issue for Mu IR and wasm (Zixian Cai, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/15

Background: we run fib benchmarks on different languages/implementations.
The first problem I noticed is that when we use `clock_gettime` and compile through the wasm toolchain, we always get `0.000000` as the reading. Using `clock` instead of `clock_gettime` solved the problem. A possible cause is that the wasm environment doesn't have access to some low-level facilities.
Then, we found that around 30% of the time, the executable produced by feeding handwritten Mu IR to Zebu gives `0.000000`. The following are the debugging attempts:
- Changing `clock_gettime` to `clock` doesn't solve the problem
- John used Holstein and the problem could not be reproduced
- Yi used lldb with a conditional breakpoint and found that sometimes, even when the register containing the result is non-zero, we still get `0.000000` as the reading.

Assignee: Yi Lin

# Issue #14: Multiple iterations of fib are optimized away (Zixian Cai, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/14

https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/blob/master/mubench/suite/micro/fib/fib.c
Under `clang -O3`, using a scaling factor of 1 or 10 doesn't make much difference.
# Issue #13: incorrect type in mu callbacks (Yi Lin, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/13

The following instruction appears in `cb_init` from the Mu callbacks:
> `(int<1>(%m.c.v.b.cmpres #1097) = SGT int<32>(%m.c.v.b.slen #1096) int<64>(0))`
It tries to compare an int32 with an int64.

Assignee: John Zhang

# Issue #12: Micro exception benchmark produces an error when run through mubench (Isaac Gariano, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/12

When I run my new modified mubench:
I get the following output:
```
[INFO] 2017-06-17 14:11:28,512 local Constructing a LocalRevision
[INFO] 2017-06-17 14:11:28,536 local Running tasks specified in file
[INFO] 2017-06-17 14:11:28,537 taskset [except_rpyc] start task
[INFO] 2017-06-17 14:11:28,537 __init__ Running pypy /home/isaacg/mu-client-pypy/rpython/bin/rpython --backend=c -O3 --no-shared --output=example/targetexcept-c /home/isaacg/mu-perf-benchmarks/mubench/suite/micro/except/targetexcept.py ClockCallback
[INFO] 2017-06-17 14:12:59,388 __init__ Running example/targetexcept-c 6 1 2000 1
[INFO] 2017-06-17 14:12:59,688 taskset [except_rpyc] FAILED
[CRITICAL] 2017-06-17 14:12:59,689 taskset Executing 'example/targetexcept-c 6 1 2000 1' failed.
[INFO] 2017-06-17 14:12:59,690 taskset [except_rpyc] error output written to example/except_rpyc.log
[INFO] 2017-06-17 14:12:59,690 taskset [except_rpymu_zebu] start task
[INFO] 2017-06-17 14:12:59,691 __init__ Running pypy /home/isaacg/mu-client-pypy/rpython/bin/rpython --backend=mu --mu-impl=zebu --mu-suplibdir=example -O3 --no-shared --output=example/targetexcept-mu /home/isaacg/mu-perf-benchmarks/mubench/suite/micro/except/targetexcept.py ClockCallback
[INFO] 2017-06-17 14:13:48,812 __init__ Running example/targetexcept-mu 6 1 2000 1
[INFO] 2017-06-17 14:13:48,913 taskset [except_rpymu_zebu] FAILED
[CRITICAL] 2017-06-17 14:13:48,913 taskset Executing 'example/targetexcept-mu 6 1 2000 1' failed.
[INFO] 2017-06-17 14:13:48,914 taskset [except_rpymu_zebu] error output written to example/except_rpymu_zebu.log
[INFO] 2017-06-17 14:13:48,914 local Generating report, compare?: False
[INFO] 2017-06-17 14:13:48,915 utils Going through pipelines to process the report
[INFO] 2017-06-17 14:13:48,915 utils [('mubench.models.pipeline.LogOutputPipeline', 42)]
[INFO] 2017-06-17 14:13:48,916 pipeline Report for Revision: LocalRevision(example/test_except.yml)
```
The contents of the mentioned log files:
example/except_rpyc.log:
```
---------------- stdout ----------------
---------------- stderr ----------------
RPython traceback:
File "implement.c", line 84, in main
File "rpython_rtyper_lltypesystem.c", line 456, in ll_int__rpy_stringPtr_Signed
Fatal RPython error: ValueError
```
example/except_rpymu_zebu.log:
```
---------------- stdout ----------------
---------------- stderr ----------------
Caught exception: <ValueError object at 0x86f894>
```
When I run the programs directly (e.g. typing `example/targetexcept-c 6 1 2000 1` into my shell), I get no error message and they behave as expected.

# Issue #11: Allow comparison between tasks (Zixian Cai, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/11

We can add an attribute to the YAML config file so that we can specify which tasks to compare. Alternatively, this could be a command-line argument.

# Issue #10: Allow easy repetitions of benchmarks (Isaac Gariano, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/10

Currently, if I want to rerun a benchmark with different parameters (iterations and other args to be passed to the benchmark program), I have to edit the `.yml` file and rerun mubench.
This is problematic for two reasons: every execution of mubench recompiles (which takes a while), and frequent editing of the `.yml` is tedious and time-consuming.
As such I have two suggestions:
* Make mubench use some kind of (intelligent?) mechanism to determine whether to recompile (e.g. like make does), but have an option to force recompilation.
* Allow passing extra arguments to mubench that get forwarded to the program (maybe have a sentinel value like `-` that will force it to read the corresponding parameter from the `.yml` file).
As an example you could then do:
`python3 mubench local example/test_except.yml -r - - 200`
Here `-r` indicates forced recompilation, and the arguments passed to the program are then `a b 200`, where `a` and `b` are the arguments that would have been passed if you didn't specify any (so `b` is read from the `.yml` file, and `a` is some kind of clock id??).

# Issue #9: Improve display of results (Isaac Gariano, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/9

To make the display of the results more readable, I have a few suggestions:
* Don't display times in exponential notation (or if you must use a consistent exponent for all timings)
* Have a consistent (and deterministic) ordering for the display of timing data (e.g. if I have two tests in my test.yaml file, sometimes the timing results for one will go before the other, and sometimes the other way round); I suggest always displaying results in the order the tests appear in the file.
* In the case of running two tests, it would be useful to have an automatic ratio display (showing how much slower one test is compared to the other).

# Issue #8: Eliminate path-related environment variables (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/8
## Problem Description
Currently environment variables such as $PYPY_MU, $MU_ZEBU need to be explicitly defined in the config.
This may not be very convenient when cloning the repository and running it on a different machine.
It is perhaps better to require the user to define these environment variables globally.

# Issue #7: Removing the dependency on NumPy, SciPy and Pandas (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/7
## Problem Description
Currently the statistics are produced using NumPy, SciPy and Pandas.
For things like mean, standard deviation and median this should be unnecessary.
It creates unnecessary dependencies on large packages.
Even for Student's t-test it shouldn't be difficult to implement the algorithm by hand.
## Task Description
- [ ] replace existing functions with simple functions that calculate mean, standard deviation and median.
- [ ] Remove the dependency on NumPy, SciPy and Pandas.

Assignee: Zixian Cai

# Issue #5: Iteration over process invocations (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/5
## Problem Description
Currently the measurement iteration is done *inside* the program following the pattern below:
```c
for (int i = 0; i < iterations; i ++) {
    cb_begin();
    // work
    cb_end();
}
```
However, this method is prone to disruptions and noise from the host machine. For example, task B's measurement can be affected by other processes while task A's measurement is not.
Thus it would be better to iterate over tasks, having each process invocation produce one data point, so that the tasks are equally affected by other processes.
## Task Description
- [x] Redesign config file specification, taking out the benchmark specification and iterations, like:
```YAML
fib:
benchmark:
# benchmark spec
iterations: 100 # 100 data points
tasks:
rpyc: # global name: fib_rpyc
language: # ...
compiler: # ...
rpymu: # ...
```
- [x] Separate task compilation and task execution, so that each task is only compiled once.
- [x] Add outside-process time measurement, as well as inside-process measurement.

# Issue #4: Changing documentation format from Markdown to reStructuredText (John Zhang, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/4
## Problem Description
Currently all documentation is written in Markdown format.
As @zcai01 pointed out, reStructuredText could be a better idea.
## Task Description
Migrate the documentation from Markdown to reStructuredText.

# Issue #3: Predefined tasks (Zixian Cai, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/3

Assignee: John Zhang

# Issue #2: Documentation/README (Zixian Cai, 2018-09-14)
https://gitlab.anu.edu.au/mu/mu-perf-benchmarks/-/issues/2

* [ ] Introduction
* [ ] Usage
* [ ] Glossary
* [ ] Configuration