Closed
Created Mar 20, 2018 by John Zhang (@u5157779), Maintainer

Managing large data sets

(Based on GitHub issue 144)

Goal

Rework the distribution model for the DaCapo suite and its internal structure to allow for very large data sets (including data in the GB range). A secondary goal is to reconsider other aspects of the distribution model and internal structure as part of this process.

Background

The current distribution model is a single, fully self-contained jar, with all libraries and data packaged within it. This has the distinct advantage of extreme simplicity from the user's point of view: the user simply downloads the jar and types java -jar dacapo.jar <benchmark>. The rationale for this is laid out in the DaCapo paper. Ease of use is a first-order principle because it encourages correct, and therefore methodologically sound, use of the suite, which is the overriding concern of the project. Complexity is antithetical to that goal.

However, future releases of DaCapo need to support very large data sets, and the model above will not scale to them, in part because a very large data set would have to be unpacked from the jar every time the jar is run. So we need to rethink the distribution model.

Proposal

Implement two packaging approaches and evaluate them both before selecting one:

  1. Minimal change for the user. Under this model, the user's experience of the dacapo jar is unchanged unless they use large data sets.
     • Advantage: only those who use very large data sets will notice any change at all.
     • Disadvantage: lack of uniformity between the use of large data sets and all other data sets.
  2. Complete change. Under this model, all data (and possibly jars) will be packaged differently.
     • Advantage: uniform treatment of all benchmark sizes, and an opportunity to move all data and jars out of the existing jar.
     • Disadvantage: major change in use for all users.

It should be straightforward to accommodate both approaches.

The requirements for external storage of any data or jars should be as follows:

  • The data/jars reside at one of a number of standard paths, or else at a user-specified location (if the user chooses a non-standard path).
  • The benchmark harness will search the standard paths (and the non-standard path, if provided) and only prompt the user if the data/jars cannot be found. Once installed, command-line use of the suite should be as simple as it was before (it should be identical).
  • If the data/jars cannot be found, the harness will invite the user to install them at a prompt, and then perform the installation automatically (see the sketch after this list).
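
As a rough illustration of that lookup order, here is a minimal sketch of how the harness might resolve externally stored data. This is not a proposed implementation: the dacapo.data.dir property, the specific standard paths, and all class and method names below are hypothetical placeholders.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class ExternalDataResolver {

    // Hypothetical standard locations, searched in order; the real list is TBD.
    private static List<Path> candidatePaths(String version) {
        List<Path> paths = new ArrayList<>();
        String override = System.getProperty("dacapo.data.dir"); // hypothetical override
        if (override != null) {
            paths.add(Paths.get(override, version)); // user-specified path wins
        }
        paths.add(Paths.get(System.getProperty("user.home"), ".dacapo", version));
        paths.add(Paths.get("/usr/local/share/dacapo", version));
        return paths;
    }

    // Resolve the external data directory for a benchmark, prompting only on failure.
    public static Path resolve(String benchmark, String version) {
        List<Path> candidates = candidatePaths(version);
        for (Path p : candidates) {
            Path data = p.resolve(benchmark);
            if (Files.isDirectory(data)) {
                return data; // found: command-line use is unchanged from today
            }
        }
        // Not found anywhere: invite the user to install, then do it automatically.
        System.out.printf("Data for '%s' not found. Install to %s? [y/N] ",
                benchmark, candidates.get(0));
        // ... read the response, download and unpack the data, return the new path ...
        throw new UnsupportedOperationException("installation step not sketched here");
    }
}
```

Checking the user-specified path first keeps any explicit override authoritative, while preserving the zero-configuration default for everyone else.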

Requirements

We will need to define a standard file structure for the extra data/jars. That structure should have some coherence with the internal jar structure. It should also be robust to version changes and to the reality that researchers may very well use multiple benchmark versions concurrently (see the example layout below).
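
For concreteness, one possible on-disk layout (purely illustrative; none of these names are settled) keys everything by release version, so that multiple suite versions can coexist while loosely mirroring the jar's internal structure:

```
~/.dacapo/
  9.12-bach/
    jar/        # per-benchmark jars, if moved out of dacapo.jar
    dat/        # large data sets, one subdirectory per benchmark
  evaluation/
    jar/
    dat/
```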

Edited May 26, 2018 by Steve Blackburn