Documentation

The documentation is in a very early stage and some parts might be outdated.

Functionality

The Divekit supports a testing concept which is based on the following components:

  • Gitlab is the platform where all exercises are managed. Learners solve exercises by uploading their solutions to Gitlab, and Gitlab CI is used to execute tests automatically. Exercises of specific milestones can be managed in the form of Gitlab groups.
  • The Origin Project is a normal project whose structure depends on the programming language used in the exercises. In addition to predefined program fragments, the project can also contain unit tests, solutions for exercises and configuration files for the tools involved. Based on this project, a code project and a test project are generated for each learner. During this step, files which are not supposed to be seen by learners (e.g. solutions) are filtered out.
  • The Code Project is generated from the Origin Project. It contains everything a learner needs to complete the exercise. When a learner has completed an exercise or wants to save an intermediate state, the learner can upload the progress using Git.
  • The Test Project usually contains all the files that the code project contains. Furthermore, additional files which are not intended for learners (e.g. hidden files) can be stored here; the main task of the test project is to encapsulate these files. In addition, the test project executes automatic tests using Gitlab CI and generates a test page afterwards.
  • For modularization reasons, all reusable program code of automatic tests is maintained within a Test Library. The library depends on a certain programming language and is referenced by both the code project and the test project.
  • The Test Page is generated by the test project individually for each learner and displays that learner's test results in the form of a static page.
  • Since each learner has a separate code project, test project and test page per milestone within Gitlab, an Overview Page is generated that assigns each learner accordingly.

Within Gitlab there exists no automatism which generates the appropriate repositories for each learner. The Divekit was developed to meet this requirement: it generates the appropriate projects for each learner based on configuration files using the Gitlab API, and afterwards generates an overview page.

Of course, the most important roles in this concept are played by the learners and teachers who interact with the components above. When a new milestone is to be prepared, an instructor creates a new exercise in the form of an Origin Project and uses the Divekit to automatically generate the projects for the learners. To test the automations provided by the Divekit in advance, the instructor can use the file system instead of the Gitlab API. After the projects have been generated, learners can solve the exercises located in their code projects. After each upload they get quick feedback from the automatically executed tests, whose results are summarized on the test page. During and also after a milestone, teachers can look up the results of specific learners using the overview page. Furthermore, by looking at a specific learner's test page they can see how many tests have been completed successfully.

In addition to these main tasks of the Divekit toolset, there is a lot of optional tooling. An overview of all available tools can be found in the following list:

1 - Getting Started

Example Repositories referenced in this guide

https://git.st.archi-lab.io/staff/divekit-example

1.1 - CI/CD Pipeline

Description of all used pipelines.

The documentation is not yet written. Feel free to add it yourself ;)

1.2 - Testrepo

In the test repo, various functionalities of the student's source code can be tested. This page describes these functionalities with simple examples.

The documentation is not yet written. Feel free to add it yourself ;)

Testing Package structure

ExampleFile

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

static final String PACKAGE_PREFIX = "thkoeln.divekit.archilab.";

@Test
public void testPackageStructure() {
    try {
        Class.forName(PACKAGE_PREFIX + "domainprimitives.StorageCapacity");
        Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookDto");
        Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookController");
        Class.forName(PACKAGE_PREFIX + "notebook.domain.Notebook");
        // using individualization and the variableExtensionConfig.json this could be simplified to
        // Class.forName("$entityPackage$.domain.$entityClass$");
        // ==> Attention: If used, the test can't be tested in the origin repo itself
    } catch (ClassNotFoundException e) {
        Assertions.fail("At least one of your entities is not in the right package, or has a wrong name. Please check package structure and spelling!");
    }
}

Testing REST Controller

ExampleFile

import org.hamcrest.Matchers;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.transaction.annotation.Transactional;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@Autowired
private MockMvc mockMvc;

@Test
public void notFoundTest() throws Exception {
    mockMvc.perform(get("/notFound")
        .accept(MediaType.APPLICATION_JSON))
        .andDo(print())
        .andExpect(status().isNotFound());
}

@Transactional
@Test
public void getPrimeNumberTest() throws Exception {
    final Integer expectedPrimeNumber = 13;
    mockMvc.perform(get("/primeNumber")
        .accept(MediaType.APPLICATION_JSON))
        .andDo(print())
        .andExpect(status().isOk())
        .andExpect(jsonPath("$", Matchers.is(expectedPrimeNumber))).andReturn();
}

Testing …

2 - Access Manager

Manages repository rights for students and supervisors on Gitlab repositories.

The documentation is not yet written. Feel free to add it yourself ;)

3 - Access Manager 2.0

Rewritten (in Python) and improved version of the original Access Manager. It is used to change the access rights of repository members in a GitLab group based on a whitelist.

Setup & Run

  1. Install Python 3 or higher
  2. Install python-gitlab using pip install python-gitlab
  3. Check the file config.py to configure the tool
  4. Run AccessManager.py using python AccessManager.py

Configuration

| Option | Purpose |
| --- | --- |
| GIT_URL | URL of your GitLab server |
| AUTH_TOKEN | Your personal GitLab access token |
| GROUP_ID | Id of the GitLab group you want to modify |
| ACCESS_LEVEL | Access level you want to provide. 1 for Maintainer, 0 for Guest |
| STUDENTS | List of users to modify. Users not in this list will be ignored. |

4 - Automated Repo Setup

The automated-repo-setup is the heart of the Divekit. It implements the core set of features which are used to automatically provide programming exercises in the form of repositories on the Gitlab platform. Optionally, exercises can be individualized using the integrated individualization mechanism.

Setup & Run

  1. Install NodeJs (version >= 12.0.0), which is required to run this tool. NodeJs can be acquired on the website nodejs.org.

  2. To use this tool you have to clone the repository to your local drive.

  3. This tool uses several libraries in order to use the Gitlab API etc. Install these libraries by running the command npm install in the root folder of this project.

  4. Local/GitLab usage

    1. For local use only
      1. Copy the origin repository into the folder resources/test/input. If this folder does not exist, create the folder test inside the resources folder and then the folder input inside the newly created test folder.
      2. The generated repositories will be located under resources/test/output after running the tool
    2. For use with Gitlab:
      1. Navigate to https://git.st.archi-lab.io/profile/personal_access_tokens (if you are using the gitlab instance git.st.archi-lab.io) and generate an Access Token / API token in order to get access to the gitlab api
      2. Copy the Access Token
      3. Rename the file .env.example to .env
      4. Open .env and replace YOUR_API_TOKEN with the token you copied.
      5. Configure the source repository and target group in the config
  5. Before you can configure or run this tool you have to copy all the example config files from the resources/examples/config folder to the resources/config folder in order to create your own config files. If you want to change the standard behaviour you can configure this tool by editing the configs.

  6. To run the application navigate into the root folder of this tool and run npm start. The repositories will now be generated.

Configuration

Before you can configure this tool you have to copy all the relevant example config files from the resources/examples/config folder to the resources/config folder in order to create your own config files. If you want to change the standard behaviour you can configure this tool by editing the configs.

The Divekit uses two types of configs: technical configs and domain-specific configs. The contents of technical configs often change each time repositories are generated using the Divekit; therefore these configs are located in the resources/config folder of the Divekit. Domain configs do not change each time new repositories are generated because they depend on the type of exercise and the corresponding domain. As a result these configs should be contained in the Origin Project (though they don't have to be). The following table lists the different configs, their purpose and their type:

| Config | Purpose | Type |
| --- | --- | --- |
| repositoryConfig | Configure the process of repository generation | Technical Config |
| originRepositoryConfig | Configure solution deletion and variable warnings | Domain Config |
| variationsConfig | Configure different types of variations | Domain Config |
| variableExtensionsConfig | Configure different extensions which are used to generate derived variables | Domain Config |
| relationsConfig | Configure properties of relations which are used to generate relation variables | Domain Config |

Features

Repository Generation

If the Divekit is run, the tool will generate repositories based on the configured options defined in the repositoryConfig. The following example shows the relevant options, each explained in short:

{
    "general": {
        # Decide whether you just want to test locally. If set to false the Gitlab api will be used
        "localMode": true,

        # Decide whether test repositories should be generated as well. If set to false only one code repository will be generated for each learner
        "createTestRepository": true,

        # Decide whether the repositories should be randomized using the *variationsConfig.json*
        "variateRepositories": true,

        # Decide whether the existing solution should be deleted using the SolutionDeleter
        "deleteSolution": false,

        # Activate warnings which will warn you if there are suspicious variable values remaining after variable placeholders have been replaced
        "activateVariableValueWarnings": true,

        # Define the number of concurrent repository generation processes. Keep in mind that high numbers can overload the Gitlab server if localMode is set to false
        "maxConcurrentWorkers": 1,

        # Optional flag: set the logging level. Valid values are "debug", "info", "warn", "error" (case insensitive). Default value is "info".
        "globalLogLevel": "debug"
    },
    "repository": {
        # The Name of the repositories. Multiple repositories will be named <repositoryName>_group_<uuid>, <repositoryName>_tests_group_<uuid> ...
        "repositoryName": "st2-praktikum",

        # The number of repositories which will be created. Only relevant if there were no repositoryMembers defined
        "repositoryCount": 0,

        # The user names of the members which get access to repositories
        "repositoryMembers": [
            ["st2-praktikum"]
        ]
    },
    "local": {
        # The file path to an origin repository which should be used for local testing
        "originRepositoryFilePath": ""
    },
    "remote": {
        # Id of the repository you want to clone
        "originRepositoryId": 1012,

        # The ids of the target groups where all repositories will be located
        "codeRepositoryTargetGroupId": 161,
        "testRepositoryTargetGroupId": 170,

        # If set to true all existing repositories inside the defined groups will be deleted
        "deleteExistingRepositories": false,

        # Define whether users are added as maintainers or as guests
        "addUsersAsGuests": false
    }
}

If localMode is set to true, the application will only generate possible variable variations and randomize files based on a folder which contains the origin repository. This folder should be located in resources/test/input. If the folder resources/test/input does not exist, create it within the root folder of this tool or run the tool once in test mode, which will generate the folder automatically. This can be used to get an idea of which repositories will result from the configs. The following example shows the location of the origin folder:

root_of_tool
    - build
    - node_modules
    - src
    - .gitignore
    - .Readme
    - resources
        - test
            - input
                - origin-folder
                    - src
                    - .gitignore
                    - .Readme

If you don't want to copy the origin repository each time you test a new version, specify the file path to the origin repository in the config under local.originRepositoryFilePath.

Partial repository generation

While running the automated-repo-setup in local mode you have the option to partially generate repositories.

To do so, just configure the repositoryConfig.json* as follows:

{
    "general": {
        "localMode": true
    },
    "local": {
        "subsetPaths": [
          "README.md",
          "path/to/malfunction/file.eof"
        ]
    }
}

*only partially shown

Start generation

npm start

Generated files are located under: resources/output/

File Assignment

Although code and test files are separated into two repositories, the exercise itself consists of only one repository, called the origin. It would be really troublesome if you had to update two repositories all the time while creating a new exercise. Because of that, there has to be a way to determine whether a file has to be copied to the code project, the test project or both. If you want some files to be copied only to a specific repository, you can express this behaviour in the filename.

  • If the filename contains the string _coderepo the file will only be copied to the code repository.
  • If the filename contains the string _testrepo the file will only be copied to the test repository.
  • If the filename contains the string _norepo the file will not be copied to the repositories. This can be used to store config files from this tool directly in the origin repository.
  • If the filename contains none of those the file will be copied to both repositories.
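Since the Divekit itself is written in Typescript, the rules above can be sketched as a small function. This is an illustrative sketch only; the function and type names are not part of the actual Divekit codebase:

```typescript
// Sketch of the file-assignment rule described above (illustrative, not the real API).
type RepoTarget = "code" | "test";

function targetRepos(fileName: string): RepoTarget[] {
    if (fileName.includes("_norepo")) return [];          // stays in the origin only
    if (fileName.includes("_coderepo")) return ["code"];  // code repository only
    if (fileName.includes("_testrepo")) return ["test"];  // test repository only
    return ["code", "test"];                              // default: copied to both
}
```

For example, a file named HiddenTests_testrepo.java would only end up in the test repository, while Readme.md would be copied to both.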

File Converter

If you want to convert or manipulate certain repository files during the repository generation process, File Converters (File Manipulators) can be used. Currently there is only one type of File Manipulator available; additional converters can easily be added by extending the codebase of the Divekit. The already existing File Manipulator is called UmletFileManipulator. This manipulator is used to convert the individualized XML representations of Umlet diagrams to image formats. This conversion step cannot be skipped because it is not possible to replace variables in image representations of Umlet diagrams. The process of individualizing UML diagrams created with Umlet is therefore as follows:

UML diagram with placeholder variables (xml) -> UML diagram with already replaced content (xml) -> UML diagram with already replaced content (image file format)

Test Overview

To give an overview of passed and failed tests of a repository, a test overview page is generated using the projects report-mapper and report-visualizer. These tools are called within the .gitlab-ci.yml file in the deploy stage.

Repository Overview

If the generation of the overview table is enabled in the repositoryConfig, the destination and the name of the overview table can be defined in the same file:

{
    "overview": {
        "generateOverview": true,
        "overviewRepositoryId": 1018,
        "overviewFileName": "st2-praktikum"
    }
}

Given the config shown above a markdown file will be generated which includes a summary of all generated repositories and their members. After that the file will be uploaded to the configured repository:

| Group | Code Repo | Test Repo | Test Page |
| --- | --- | --- | --- |
| user1 | https://example.com/folder1/repo1 | https://example.com/folder1/tests/repo1 | https://pages.example.com/folder1/tests/repo1 |
| user2 | https://example.com/folder1/repo2 | https://example.com/folder1/tests/repo2 | https://pages.example.com/folder1/tests/repo2 |
| user3 | https://example.com/folder1/repo3 | https://example.com/folder1/tests/repo3 | https://pages.example.com/folder1/tests/repo3 |
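Conceptually, such an overview is just a markdown table with one row per generated repository set. The following is a hypothetical Typescript helper, not the actual Overview Generator code:

```typescript
// Hypothetical sketch: render one markdown overview row per generated repository set.
// The real Overview Generator may format the table differently.
interface RepoSet {
    group: string;
    codeRepo: string;
    testRepo: string;
    testPage: string;
}

function overviewTable(sets: RepoSet[]): string {
    const header = "| Group | Code Repo | Test Repo | Test Page |\n| --- | --- | --- | --- |";
    const rows = sets.map(s => `| ${s.group} | ${s.codeRepo} | ${s.testRepo} | ${s.testPage} |`);
    return [header, ...rows].join("\n");
}
```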

Solution Deletion

If you want solutions contained in your origin project to be removed while creating the code and test repositories, enable solution deletion in the repositoryConfig. The originRepositoryConfig specifies the keywords which are used to either

  • delete a file
  • delete a paragraph
  • replace a paragraph

This can be shown best with an example:

// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
    //unsup
    return number1 + number2;
    //unsup
}

// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
    //delete
    return number1 * number2;
    //delete
}

will be changed to:

// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
    throw new UnsupportedOperationException();
}

// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
    
}

The corresponding config entry in the originRepositoryConfig would be:

{
    "solutionDeletion": {
        "deleteFileKey": "//deleteFile",
        "deleteParagraphKey": "//delete",
        "replaceMap": {
            "//unsup": "throw new UnsupportedOperationException();"
        }
    }
}

A file containing the string “//deleteFile” would be deleted.
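The paragraph-deletion and replacement behaviour shown above can be sketched in Typescript. This is an illustrative sketch, not the actual SolutionDeleter implementation; the marker strings match the config example, and handling of the deleteFileKey (dropping the whole file) is left out:

```typescript
// Sketch of paragraph-based solution deletion: lines between a pair of markers
// are dropped; for markers in the replaceMap, the replacement line is emitted.
function deleteSolutions(
    source: string,
    deleteParagraphKey: string,
    replaceMap: Record<string, string>
): string {
    const out: string[] = [];
    let active: string | null = null; // marker of the paragraph we are currently inside
    for (const line of source.split("\n")) {
        const trimmed = line.trim();
        if (active !== null) {
            if (trimmed === active) { // closing marker found
                const replacement = replaceMap[active];
                if (replacement !== undefined) {
                    const indent = line.slice(0, line.indexOf(active));
                    out.push(indent + replacement); // e.g. throw new UnsupportedOperationException();
                }
                active = null;
            }
            continue; // lines between the markers are dropped
        }
        if (trimmed === deleteParagraphKey || trimmed in replaceMap) {
            active = trimmed; // opening marker: start dropping lines
            continue;
        }
        out.push(line);
    }
    return out.join("\n");
}
```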

Individualization

If you want your project to be randomized slightly, use the configuration files variationsConfig.json, variableExtensionsConfig and relationsConfig to create variables. Variables can later be referenced by their name encapsulated in configured delimiter characters, e.g. $ThisIsAVariable$.

Variable Generation

There are three types of variables:

Object Variables

Object Variables are used to randomize Entities and Value Objects. Such variables are created by defining one or multiple ids and an array of possible object variations. Object variations can contain attributes which will later be transformed into variables. An example attribute could be Class, which contains the class name of an entity. Keep in mind that attributes are not limited to a single primitive value but can also be expressed as a nested object inside the json. The following json shows a possible declaration of object variations inside the variationsConfig:

{
 "ids": "Vehicle",
 "objectVariations": [
   {
     "id": "Car",
     "Class": "Car",
     "RepoClass": "CarRepository",
     "SetToOne": "setCar",
     "SetToMany": "setCars"
   },
   {
     "id": "Truck",
     "Class": "Truck",
     "RepoClass": "TruckRepository",
     "SetToOne": "setTruck",
     "SetToMany": "setTrucks"
   },
   {
     "id": "Train",
     "Class": "Train",
     "RepoClass": "TrainRepository",
     "SetToOne": "setTrain",
     "SetToMany": "setTrains"
   }
 ],
 "variableExtensions": [
   "Getter"
 ]
},
{
 "ids": ["Wheel1", "Wheel2"],
 "objectVariations": [
   {
     "id": "FrontWheel",
     "Class": "FrontWheel",
     "RepoClass": "FrontWheelRepository",
     "SetToOne": "setFrontWheel",
     "SetToMany": "setFrontWheels"
   },
   {
     "id": "BackWheel",
     "Class": "BackWheel",
     "RepoClass": "BackWheelRepository",
     "SetToOne": "setBackWheel",
     "SetToMany": "setBackWheels"
   }
 ],
 "variableExtensions": ["Getter"]
}

The defined object variations are now randomly assigned to the variables Vehicle, Wheel1 and Wheel2. The following dictionary shows variables which result from the above declaration:

VehicleClass: 'Truck',
VehicleRepoClass: 'TruckRepository',
VehicleGetToOne: 'getTruck',
VehicleGetToMany: 'getTrucks',
VehicleSetToOne: 'setTruck',
VehicleSetToMany: 'setTrucks',
Wheel1Class: 'BackWheel',
Wheel1RepoClass: 'BackWheelRepository',
Wheel1GetToOne: 'getBackWheel',
Wheel1GetToMany: 'getBackWheels',
Wheel1SetToOne: 'setBackWheel',
Wheel1SetToMany: 'setBackWheels',
Wheel2Class: 'FrontWheel',
Wheel2RepoClass: 'FrontWheelRepository',
Wheel2GetToOne: 'getFrontWheel',
Wheel2GetToMany: 'getFrontWheels',
Wheel2SetToOne: 'setFrontWheel',
Wheel2SetToMany: 'setFrontWheels'

In the example above you can see that some variables could be derived from already existing variables; the setter variables are a perfect example of this. Such variables can also be defined through variable extensions, as done for the getter variables in the example. Two steps are required to define such derived variables:

  1. Define a rule for a variable extension in the config variableExtensionsConfig.json:
{
 "id": "Getter",
 "variableExtensions": {
   "GetToOne": {
     "preValue": "get",
     "value": "CLASS",
     "postValue": "",
     "modifier": "NONE"
   },
   "GetToMany": {
     "preValue": "get",
     "value": "PLURAL",
     "postValue": "",
     "modifier": "NONE"
   }
 }
}

The value attribute references an already existing variable which is modified through the given modifier. Valid modifiers can for example convert the given variable to an all lower case variant.

The resulting value is then concatenated with the preValue and postValue like so: preValue + modifier(value) + postValue.

  2. Define a certain variable extension for an object by adding the id of the variable extension to the list of variable extensions of an object (see example above).
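Conceptually, applying such an extension rule can be sketched in Typescript. This is an illustrative sketch: the assumption that "CLASS" resolves to the object's Class attribute and "PLURAL" to its plural form is ours, and any modifier name besides "NONE" is hypothetical:

```typescript
// Sketch of how an extension rule derives a new variable as
// preValue + modifier(value) + postValue (illustrative, not the real API).
interface ExtensionRule {
    preValue: string;
    value: string;    // references an existing variable, e.g. "CLASS" (assumption)
    postValue: string;
    modifier: string; // "NONE" per the config example; other names are hypothetical
}

function applyExtension(rule: ExtensionRule, base: Record<string, string>): string {
    const value = base[rule.value] ?? "";
    // hypothetical modifier converting the value to all lower case
    const modified = rule.modifier === "ALL_LOWER_CASE" ? value.toLowerCase() : value;
    return rule.preValue + modified + rule.postValue;
}
```

With base values { CLASS: "Truck", PLURAL: "Trucks" }, the GetToOne rule from the example yields getTruck and the GetToMany rule yields getTrucks.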

Relation Variables

Relation Variables are used to randomize relations between entities. They are defined by declaring an array of relationShips and an array of relationObjects inside the variationsConfig. Both arrays must be of equal length because each set of relationObjects will be assigned to a relationShip.

In order to define a relationShip you have to provide an id and a reference to a relationShip type. These types are defined in the file relationsConfig and can contain any kind of attributes:

{
 "id": "OneToOne",
 "Umlet": "lt=-\nm1=1\nm2=1",
 "Short": "1 - 1",
 "Description": "one to one"
}

In order to define a set of relationObjects you have to provide an id and two object references. The following json shows an example definition for relations:

{
 "relationShips": [
   {
     "id": "Rel1",
     "relationType": "OneToOne"
   },
   {
     "id": "Rel2",
     "relationType": "OneToMany"
   }
 ],
 "relationObjects": [
   {
     "id": "RelVehicleWheel1",
     "Obj1": "Vehicle",
     "Obj2": "Wheel1"
   },
   {
     "id": "RelVehicleWheel2",
     "Obj1": "Vehicle",
     "Obj2": "Wheel2"
   }
 ]
}

For each relationship two kinds of variables will be generated.

  • One kind clarifies which objects belong to a certain relationship. These variables start with, for example, Rel1 as defined in the section relationShips.

  • The other kind clarifies which relationship belongs to a set of objects. These variables start with, for example, RelVehicleWheel1 as defined in the section relationObjects.

For each of these two kinds a set of variables will be generated. The first set contains attributes of the relation types defined in the relationsConfig. The other set contains attributes of the objects defined in the variationsConfig.

The following json shows a set of variables which will be generated for a single relationship:

Rel1_Umlet: 'lt=-\nm1=1\nm2=1',
Rel1_Short: '1 - 1',
Rel1_Description: 'one to one',
Rel1_Obj1Class: 'Truck',
Rel1_Obj1RepoClass: 'TruckRepository',
Rel1_Obj1GetToOne: 'getTruck',
Rel1_Obj1GetToMany: 'getTrucks',
Rel1_Obj1SetToOne: 'setTruck',
Rel1_Obj1SetToMany: 'setTrucks',
Rel1_Obj2Class: 'BackWheel',
Rel1_Obj2RepoClass: 'BackWheelRepository',
Rel1_Obj2GetToOne: 'getBackWheel',
Rel1_Obj2GetToMany: 'getBackWheels',
Rel1_Obj2SetToOne: 'setBackWheel',
Rel1_Obj2SetToMany: 'setBackWheels',

RelVehicleWheel1_Umlet: 'lt=-\nm1=1\nm2=1',
RelVehicleWheel1_Short: '1 - 1',
RelVehicleWheel1_Description: 'one to one',
RelVehicleWheel1_Obj1Class: 'Truck',
RelVehicleWheel1_Obj1RepoClass: 'TruckRepository',
RelVehicleWheel1_Obj1GetToOne: 'getTruck',
RelVehicleWheel1_Obj1GetToMany: 'getTrucks',
RelVehicleWheel1_Obj1SetToOne: 'setTruck',
RelVehicleWheel1_Obj1SetToMany: 'setTrucks',
RelVehicleWheel1_Obj2Class: 'BackWheel',
RelVehicleWheel1_Obj2RepoClass: 'BackWheelRepository',
RelVehicleWheel1_Obj2GetToOne: 'getBackWheel',
RelVehicleWheel1_Obj2GetToMany: 'getBackWheels',
RelVehicleWheel1_Obj2SetToOne: 'setBackWheel',
RelVehicleWheel1_Obj2SetToMany: 'setBackWheels',

Logic Variables

Logic Variables are used to randomize logic elements of an exercise. The idea behind this concept is that you can define multiple groups of business logic, but only one group of business logic is assigned to each individual exercise. Logic variables can also be used to define text which describes a certain business logic. Here is an example of the definition of logic variables:

{
  "id": "VehicleLogic",
  "logicVariations": [
    {
      "id": "VehicleCrash",
      "Description": "Keep in mind that this text is just an example. \nThis is a new line"
    },
    {
      "id": "VehicleShop",
      "Description": "The Vehicle Shop exercise was selected"
    }
  ]
}

The above example will generate only one variable, called VehicleLogicDescription. The interesting part of the logic variations are the ids. If you add an underscore followed by such an id to the end of a file name, this file is only inserted into an individual repository if that id was selected during the randomization.

e.g.: The file VehicleCrashTest_VehicleCrash.java is only inserted if the logic VehicleCrash was selected. The file VehicleShopTest_VehicleShop.java is only inserted if the logic VehicleShop was selected.

This can be used to dynamically insert certain test classes which test a specific business logic. If a certain test class was not inserted into an individual repository, the learner solving this exercise does not have to implement the corresponding business logic.
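The file-selection rule for logic ids can be sketched as follows (illustrative Typescript, not the actual Divekit code; function and parameter names are ours):

```typescript
// Sketch: a file is included in an individual repository if it has no logic-id
// suffix at all, or if its suffix matches one of the selected logic ids.
function isFileSelected(fileName: string, allIds: string[], selectedIds: string[]): boolean {
    const base = fileName.replace(/\.[^.]+$/, "");        // strip the file extension
    const id = allIds.find(i => base.endsWith("_" + i));  // suffix like _VehicleCrash
    return id === undefined || selectedIds.includes(id);  // no suffix: always included
}
```

For example, with VehicleCrash selected, VehicleCrashTest_VehicleCrash.java is included while VehicleShopTest_VehicleShop.java is not.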

Variable Post Processing

Often variable values are needed not only in their original form but also in lower-case variants. Therefore, for each generated variable three different types are generated:

  1. The first type is the variable itself without further changes, e.g.: VehicleClass -> MonsterTruck

  2. The second type sets the first character to lower case, e.g.: vehicleClass -> monsterTruck

  3. The third type sets all characters to lower case, e.g.: vehicleclass -> monstertruck
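The three casing variants can be sketched in Typescript (an illustrative sketch; the real post-processing implementation may differ):

```typescript
// Sketch: derive the three casing variants generated for each variable.
function casingVariants(name: string, value: string): Record<string, string> {
    const lowerFirst = (s: string) => s.charAt(0).toLowerCase() + s.slice(1);
    return {
        [name]: value,                              // VehicleClass -> MonsterTruck
        [lowerFirst(name)]: lowerFirst(value),      // vehicleClass -> monsterTruck
        [name.toLowerCase()]: value.toLowerCase(),  // vehicleclass -> monstertruck
    };
}
```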

Variable Replacement

In the process of repository individualization, all defined variables will be replaced in all the origin repository files with their corresponding value. Typically every variable which should be replaced is decorated with a specific string at its start and end, e.g. $VehicleClass$ or xxxVehicleClassxxx. This string helps identify variables. If needed, this string can be set to an empty string; in this case the variable name can be inserted into specific files without further decoration. This can lead to problems during variable replacement, so the Divekit takes certain measures to ensure that all variables are replaced correctly. The decoration string can be configured in the originRepositoryConfig:

{
    "variables": {
        "variableDelimeter": "$"
    }
}
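Conceptually, the replacement step boils down to the following Typescript sketch (illustrative, not the actual implementation):

```typescript
// Sketch: replace every occurrence of $Name$ (or another configured delimiter)
// with the variable's value.
function replaceVariables(
    content: string,
    variables: Record<string, string>,
    delimiter: string
): string {
    let result = content;
    for (const [name, value] of Object.entries(variables)) {
        // split/join replaces all occurrences without regex escaping concerns
        result = result.split(delimiter + name + delimiter).join(value);
    }
    return result;
}
```

For example, with the delimiter "$" and the variable VehicleClass = "Truck", the line `class $VehicleClass$ {}` becomes `class Truck {}`.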

Variable Value Warnings

If this feature is activated within the repositoryConfig, the tool prints warnings informing you about suspicious variable values remaining after variable placeholders have been replaced. If, for example, a learner has to solve an exercise which contains Trucks instead of Cars (see config above), then the solution of this learner should not contain variable values like "Car", "CarRepository", "setCar" or "setCars". In the originRepositoryConfig you can define a whitelist of file types which should be included in the warning process. Additionally an ignoreList can be configured: if a variable value is contained in one of the values defined in the ignoreList, this specific variable value will not trigger a warning. The following json is an example of the discussed configurable options:

{
    "warnings": {
        "variableValueWarnings": {
            "typeWhiteList": [
                "json",
                "java",
                "md"
            ],
            "ignoreList": [
                "name",
                "type"
            ]
        }
    }
}

Individual Repository Persist

If you run the tool, the default behaviour is that it will generate individual variables for each repository specified in the repositoryConfig. If you want to reuse already generated variables, you can set "useSavedIndividualRepositories" to "true" and define a file name under "savedIndividualRepositoriesFileName". The file name is relative to the folder resources/individual_repositories. These options are defined in the repositoryConfig:

{
    "individualRepositoryPersist": {
        "useSavedIndividualRepositories": true,
        "savedIndividualRepositoriesFileName": "individual_repositories_22-06-2021 12-58-31.json"
    }
}

A single entry in such an individual repositories file can be edited with a normal text editor and could look like this:

{
    "id": "67e6be38-ae36-4fbf-9d03-0993d97f7559",
    "members": [
        "user1"   
    ],
    "individualSelectionCollection": {
        "individualObjectSelection": {
            "Vehicle": "Truck",
            "Wheel1": "BackWheel",
            "Wheel2": "FrontWheel"
        },
        "individualRelationSelection": {
            "Rel1": "RelVehicleWheel2",
            "Rel2": "RelVehicleWheel1"
        },
        "individualLogicSelection": {
            "VehicleLogic": "VehicleCrash"
        }
    }
}

Components

The component diagram above shows the components of the Divekit which are used in the process of generating and individualizing repositories. In the following the repository generation process will be explained step by step and the components relevant in each step are described:

  1. The Repository Creator delegates most of the tasks involved in the repository generation process to other components. Before repositories are generated the Repository Creator calls the Repository Adapter to prepare the environment. This includes for example creating empty folders for repositories or deleting previous data which is contained in the destination folder. A Repository Adapter functions like an interface to the environment in which new repositories are being generated. At the moment there are two kinds of Repository Adapters: One for the local file system and one for Gitlab.

  2. The Content Retriever retrieves all files from the configured origin repository. In order to access the origin repository the component will use a Repository Adapter. If solution deletion is activated, the solution contained inside the origin repository is deleted from the retrieved origin files (not from the origin repository itself).

  3. For each configured repository or learner a specific configuration is generated by the Individual Repository Manager. This configuration is used by other components while generating repositories and contains, for example, a unique id and the usernames of learners. If individualization is activated, specific variations and corresponding variables are generated for each configuration by the Variation Generator. These variations and variables will also be contained in the separate configurations generated by the Individual Repository Manager.

  4. For each repository configuration generated by the Individual Repository Manager in the previous step, a Content Provider is instantiated. After the content has been varied using the randomly generated variations from the previous step, the defined File Manipulators (File Converters) are executed. Finally, the resulting files are pushed to a new repository using a Repository Adapter.

  5. After all Content Providers have finished generating their corresponding repositories, the Overview Generator collects basic information from the Content Providers and generates an overview of all links leading to Code Projects, Test Projects and Test Pages.
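The five steps above can be sketched in simplified TypeScript. All type and function names here are illustrative assumptions, not the actual Divekit API; the real components live in the packages listed in the table below.

```typescript
// Simplified sketch of the generation flow. All names are assumptions
// for illustration; they do not mirror the actual Divekit codebase.
type RepoConfig = {
  id: number;
  members: string[];
  variables: Record<string, string>;
};

// Step 3: one configuration per configured repository or learner group.
function createRepoConfigs(learnerGroups: string[][]): RepoConfig[] {
  return learnerGroups.map((members, id) => ({
    id,
    members,
    // In the real tool the Variation Generator would fill in randomly
    // selected variations here.
    variables: { VehicleLogic: "VehicleCrash" },
  }));
}

// Step 5: the Overview Generator collects one entry per generated repository.
function generateOverview(configs: RepoConfig[]): string[] {
  return configs.map(
    (c) => `repo ${c.id}: code/test/test-page links for ${c.members.join(", ")}`
  );
}

const overview = generateOverview(createRepoConfigs([["alice"], ["bob", "carol"]]));
console.log(overview.length); // 2
```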

The following table lists the relevant packages inside the codebase for each component:

Component                       Relevant Packages
Repository Creator              repository_creation
Individual Repository Manager   repository_creation
Variation Generator             content_variation
Content Retriever               content_manager, solution_deletion
Content Provider                content_manager, content_variation, file_manipulator
Overview Generator              generate_overview
Repository Adapter              repository_adapter

Design-Decisions

Design-Decision                              Explanation
Typescript chosen as programming language    Easy handling of dynamic JSON structures; good API support for Gitlab; platform independent; can be executed locally with Node.js

5 - CLI

CLI to simplify and orchestrate the usage of multiple divekit tools.

The following documentation is more a concept than a description of a finished application.

Getting started

Prerequisites

  • npm
  • python
  • pip

Usage

npm i @divekit/cli -g # install

mkdir st-2042 # create empty project directory
cd st-2042

divekit init -g 152 -c 14 # initializes in GitLab Dir with groupId 152 and with config base from project in GitLab dir 14

File structure

st-2042
├── .divekit # all configs, tools etc. needed for this project and the divekit-cli
|   ├── config
|   |    ├── automated-repo-setup
|   |    ├── ...
|   |    └── basic.yml # name, milestone data, ...
|   └── tools
|        ├── access-manager
|        └── ...
├── milestone1
|   ├── config.yml # milestone specific domain-config
|   └── origin-repo
|       └── ...
├── milestone2
└── ...

basic.yml example

name: ST1-2022
rootStaffGroupId: 42
rootStudentGroupId: 43
overviewGroupId: 44
milestones:
  m1:
    groupId: 54
    start: 20.08.2023
    end: 04.09.2023
    codeRepoGroupId: 142
    testRepoGroupId: 143
  m2:
    groupId: 67
    start: 07.09.2023
    end: 27.09.2023
    codeRepoGroupId: 152
    testRepoGroupId: 153
  # ...
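As an illustration of how a tool might consume this structure once the YAML is parsed, here is a hedged TypeScript sketch. The interfaces mirror the basic.yml example above; the `codeRepoGroup` helper is an assumption for illustration, not part of the actual CLI.

```typescript
// Shapes mirroring the basic.yml example above, after YAML parsing.
interface Milestone {
  groupId: number;
  start: string;
  end: string;
  codeRepoGroupId: number;
  testRepoGroupId: number;
}

interface BasicConfig {
  name: string;
  rootStaffGroupId: number;
  rootStudentGroupId: number;
  overviewGroupId: number;
  milestones: Record<string, Milestone>;
}

const config: BasicConfig = {
  name: "ST1-2022",
  rootStaffGroupId: 42,
  rootStudentGroupId: 43,
  overviewGroupId: 44,
  milestones: {
    m1: { groupId: 54, start: "20.08.2023", end: "04.09.2023", codeRepoGroupId: 142, testRepoGroupId: 143 },
    m2: { groupId: 67, start: "07.09.2023", end: "27.09.2023", codeRepoGroupId: 152, testRepoGroupId: 153 },
  },
};

// Hypothetical helper: look up where the code repos of a milestone live.
function codeRepoGroup(cfg: BasicConfig, milestone: string): number {
  const m = cfg.milestones[milestone];
  if (!m) throw new Error(`unknown milestone: ${milestone}`);
  return m.codeRepoGroupId;
}

console.log(codeRepoGroup(config, "m2")); // 152
```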

Functionality

divekit init

Initialize a new divekit project, done once per semester and module. Must be used in an empty directory, otherwise it only checks and installs the existing dependencies needed for the tools. An example of the generated file structure can be seen above.

Options

  • -w / --wizard install via a wizard which asks for and explains all parameters and offers options where available
  • -g / --groupId groupId of the corresponding GitLab directory (should be empty as well)
  • -n / --name set a name for the project (defaults to the name of the parent directory)
  • -c / --config <groupId> copy a previously used config from the given project (nice to have: also works on old projects)

Errors

  • exits if node, python or pip are not available

Example usage

Creating a new project with the help of the wizard.

mkdir st1-2023
divekit init --wizard
$ "Please enter the GitLab GroupId where all Staff content will be managed (should be empty):"
$ 150 # <user_input>
$ "Please enter the GitLab GroupId where all student repositories will be published:"
$ 151 # <user_input>
$ "Please enter a name for the project (default: st1-2023)"
$ # <user_input>
$ "Do you want to use an already existing configuration? If so, please enter the groupId of the corresponding project; otherwise a basic config will be initialized"
$ 42 # <user_input>
$ "Divekit initialized successfully"
# Result: Directory .divekit created locally and pushed to GitLab

divekit milestone

Manage the creation, testing, publication and more for a milestone.

If not configured otherwise, naming, config etc. will be copied from the last milestone.

Options

  • -w / --wizard create the next milestone via a wizard which asks for and explains all parameters and offers options where available (for more information see the example below)
  • -n / --name <string> set a custom name for the milestone (defaults to milestone)
  • -t / --template <groupId> create the next milestone based on an already existing origin repo
  • -f / --fresh while creating a new milestone, ignore already existing milestones and their configuration
  • -i / --id <number> specify the milestoneId (defaults to 1, or the last existing milestone + 1)
  • -l / --local <number> create a given number of student repos locally in a demo folder
  • -r / --random <none | all | roundrobin> configure randomization: none at all, everything (like normal student repo generation), or roundrobin, which tries to create a uniform distribution of all variations
  • -p / --publish create and publish all repos for students based on the origin repo
  • -d / --demo <repoCount> create a given count of test repositories
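To illustrate the roundrobin option, here is a minimal sketch of how a uniform distribution of variations over repositories could be achieved. The function name and shape are assumptions for illustration, not the actual implementation.

```typescript
// Sketch of the --random roundrobin idea: assign variations to repos in a
// cycle, so each variation appears a near-equal number of times.
function roundRobinAssign<T>(variations: T[], repoCount: number): T[] {
  const result: T[] = [];
  for (let i = 0; i < repoCount; i++) {
    // Cycle through the variations instead of picking randomly.
    result.push(variations[i % variations.length]);
  }
  return result;
}

console.log(roundRobinAssign(["RelVehicleWheel1", "RelVehicleWheel2"], 5));
// ["RelVehicleWheel1", "RelVehicleWheel2", "RelVehicleWheel1", "RelVehicleWheel2", "RelVehicleWheel1"]
```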

Example usage

Create the first milestone with the help of the wizard

divekit milestone --wizard
$ "creating the first milestone of this project"
$ "please enter a number for the first milestone (default: 1)"
$ # <user_input>
$ "Do you want to use an already existing milestone as a template?"
$ "Please enter the groupId, otherwise skip with Enter"
$ 47 # <user_input>
$ "Milestone created - please configure the origin repository"
# Result:
# - milestone1 directory created based on template repo
# - updated configs with new milestone and push everything to GitLab
# - also created directories for code/test repositories in GitLab.

Create further milestone with the help of the wizard

divekit milestone --wizard
$ "creating a milestone for st1-2023"
$ "please enter a number for the milestone (default: 2)"
$ # <user_input>
$ "Do you want to use an already existing milestone as a template?"
$ "Please enter the groupId, otherwise milestone2 will be based on milestone1"
$ # <user_input>
$ "Milestone created - please configure the origin repository"
# Result:
# - milestone2 directory created based on milestone1 repo
# - updated configs with new milestone and push everything to GitLab
# - also created directories for code/test repositories for milestone2 in GitLab.

divekit patch

Edit the current or given milestone.

Options

  • -m / --milestone <number> number of the milestone you want to edit (default: latest existing milestone)
  • -g / --g <[groupId]> list of groupIds where the edits are published (default: student repositories of the milestone)
  • -f / --files <[names]> list of files to edit/create
  • -c / --commit <msg> custom commit message (default: “fix 42”)
  • -d / --debug sets the log level to debug

Use Cases and other ideas - which should be considered

  • create repos for latecomers, e.g. because they passed the previous milestone after a discussion or forgot to register.
  • divide the config structure between technical and domain configuration

6 - Divekit Language Plugin

Supports Code-Completion for Divekit generated individualization variables

The documentation is not yet written. Feel free to add it yourself ;)

7 - Divekit Language Server

Supports Code-Completion for Divekit generated individualization variables

The documentation is not yet written. Feel free to add it yourself ;)

8 - Evaluation Processor

Counts points of detailed evaluation schemes and generates a file which includes the points per exercise and the total points.

The documentation is not yet written. Feel free to add it yourself ;)

9 - Operator

UI-Tooling to simplify user interaction for manual testing of milestones or exams.

The documentation is not yet written. Feel free to add it yourself ;)

Developed in a “Praxisprojekt” and not yet tested in practice.

10 - Passchecker

Tool to determine who has passed / failed a given milestone.

The documentation is not yet written. Feel free to add it yourself ;)

11 - Plagiarism Detector

Detects variations of defined variables which should not be contained in specific student repositories.

The documentation is not yet written. Feel free to add it yourself ;)

12 - Repo Editor

Tooling to allow multi-edit for a batch of specified repositories.

The documentation is in a very early stage and some parts might be outdated.

The divekit-repo-editor allows the subsequent adjustment of individual files over a larger number of repositories.

The editor has two different functionalities: one is to apply the same change to a file in all repositories, the other is to adjust individual files in specific repositories based on the project name.
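The distinction between the two modes can be sketched as follows: files placed at the asset root target every repository, while files nested under a project-name directory target only that project. The function and path conventions below are assumptions for illustration, loosely based on the asset layout shown in the Configuration section.

```typescript
// Hypothetical sketch: decide which repositories an asset file targets.
// Asset layout assumption: "code/<PROJECT-NAME>/..." or "test/<PROJECT-NAME>/..."
// for individual edits; anything else applies to all repositories.
function targetsForAsset(assetPath: string, allProjects: string[]): string[] {
  const parts = assetPath.split("/");
  if ((parts[0] === "code" || parts[0] === "test") && parts.length > 2) {
    // Individual edit: match a single project by name.
    return allProjects.filter((p) => p === parts[1]);
  }
  // Global edit: applies to every repository.
  return allProjects;
}

const projects = ["ST2M1_group_a", "ST2M1_group_b"];
console.log(targetsForAsset("code/ST2M1_group_a/Readme.md", projects)); // ["ST2M1_group_a"]
console.log(targetsForAsset("fix.md", projects)); // both projects
```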

Setup & Run

  1. Install Node.js (version >= 12.0.0), which is required to run this tool. Node.js can be acquired from the website nodejs.org.

  2. To use this tool you have to clone this repository to your local drive.

  3. This tool uses several libraries in order to use the Gitlab API etc. Install these libraries by running the command npm install in the root folder of this project.

  4. Configure Token

    1. Navigate to your Profile and generate an Access Token / API-Token in order to get access to the gitlab api
    2. Copy the Access Token
    3. Rename the file .env.example to .env
    4. Open .env and replace YOUR_API_TOKEN with the token you copied.
  5. Configure the application via src/main/config/ and add files to assets/, see below for more details.

  6. To run the application navigate into the root folder of this tool and run npm start. All assets will be updated. Use npm run useSetupInput if you want to use the latest output of the automated-repo-setup as input for the edit.

Configuration

Place all files that should be edited in the corresponding directories:

input
└── assets
   ├── code
   │  ├── PROJECT-NAME-WITH-UUID
   │  │  └── <add files for a specific student here>
   │  └── ...
   ├── test
   │  ├── PROJECT-NAME-WITH-UUID
   │  │  └── <add files for a specific student here>
   │  └── ...
   └── <add files for ALL repos here>

src/main/config/editorConfig.json: Configure which groups should be updated and define the commit message:

{
  "onlyUpdateTestProjects": false,
  "onlyUpdateCodeProjects": false,
  "groupIds": [
    1862
  ],
  "logLevel": "info",
  "commitMsg": "individual update test"
}
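As an illustration of how the onlyUpdate* flags might steer which projects receive an edit, consider the following sketch. The flag semantics are inferred from their names and the naming of test repositories elsewhere in these docs; this is not the actual implementation.

```typescript
// Hypothetical sketch of the editorConfig.json flags.
interface EditorConfig {
  onlyUpdateTestProjects: boolean;
  onlyUpdateCodeProjects: boolean;
}

// Assumption: test repositories carry "test" in their name
// (e.g. "ST2MS0_tests_group_...").
function shouldUpdate(projectName: string, cfg: EditorConfig): boolean {
  const isTest = projectName.includes("test");
  if (cfg.onlyUpdateTestProjects) return isTest;
  if (cfg.onlyUpdateCodeProjects) return !isTest;
  return true; // both flags false: update everything
}

console.log(shouldUpdate("ST2MS0_tests_group_x", { onlyUpdateTestProjects: true, onlyUpdateCodeProjects: false })); // true
```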

Changelog

1.0.0

  • add individual updates per project

0.1.1

  • add feature to force create/update

0.1.0

  • add feature to update or create files based on given structure in asset/*/ for all repositories

0.0.1

  • initialize project based on the divekit-evaluation-processor

13 - Report Mapper

The report-mapper is an integral part of the GitLab CI/CD pipeline for all test repositories. It ensures that various inputs from Surefire reports and from PMD and Checkstyle codestyle checks are converted into a uniform format. The result is an XML file which is subsequently used by the report-visualizer to produce a readable output.
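To make the idea of a uniform format concrete, the sketch below reduces a Surefire-style test case to a shared shape resembling the <testcase> elements of unified.xml shown further down this page. The field names are assumptions for illustration, not the mapper's actual types.

```typescript
// Hypothetical unified shape, loosely modeled on the unified.xml example.
interface UnifiedCase {
  name: string;
  status: "passed" | "failed";
  message?: string;
}

// Sketch: map one Surefire-style test case into the unified shape.
function mapSurefireCase(tc: { name: string; failureMessage?: string }): UnifiedCase {
  return tc.failureMessage
    ? { name: tc.name, status: "failed", message: tc.failureMessage }
    : { name: tc.name, status: "passed" };
}

// A failing case carries its message into the unified format.
console.log(mapSurefireCase({ name: "testCleanCodeAndSolidReview", failureMessage: "break pipeline" }));
```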

Architecture overview

Usage in the pipeline

For the usage in the pipeline you just need node as a prerequisite; then install and use the report-mapper as follows:

npm install @divekit/report-mapper
npx report-mapper

Keep in mind to provide the needed input data based on your configuration.

Complete sample test-repo pipeline-script

image: maven:3-jdk-11

stages:
  - build
  - deploy

build: # Build test reports
  stage: build
  script:
    - chmod ugo+x ./setup-test-environment.sh
    - ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
    - mvn pmd:pmd # build clean code report
    - mvn verify -fn # always return status code 0 => Continue with the next stage
  allow_failure: true
  artifacts: # keep reports for the next stage
    paths:
      - target/pmd.xml
      - target/surefire-reports/TEST-*.xml

pages: # gather reports and visualize via gitlab-pages
  image: node:latest
  stage: deploy
  script:
    - npm install @divekit/report-mapper
    - npx report-mapper # run generate unified.xml file
    - npm install @divekit/report-visualizer
    - npx report-visualizer --title $CI_PROJECT_NAME # generate page

  artifacts:
    paths:
      - public
  only:
    - master

Configuration

The report mapper is configurable in two main ways:

  1. By defining which inputs are expected and therefore should be computed. This is configurable via parameters. You can choose from the following: pmd, checkstyle* and surefire. If none are provided, it defaults to surefire and pmd.
npx report-mapper [surefire pmd checkstyle]
  2. The second option is specific to PMD. PMD itself has a configuration file pmd-ruleset.xml which defines which PMD rules should be checked. The report mapper also reads this file and shapes its output based on the available rules.
    Note: The assignment of PMD rules to clean code and SOLID principles is currently hardcoded and not configurable.

*The checkstyle-mapper is currently not included in the testing and therefore should be used with caution.
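Under the assumption that the parameters are plain CLI arguments, the default behaviour described above could be sketched like this (the function and its structure are illustrative, not the mapper's actual code):

```typescript
// Sketch of the parameter handling: mapper names are read from the CLI
// arguments and fall back to surefire + pmd when none are given.
const KNOWN_MAPPERS = ["surefire", "pmd", "checkstyle"];

function selectMappers(args: string[]): string[] {
  const requested = args.filter((a) => KNOWN_MAPPERS.includes(a));
  return requested.length > 0 ? requested : ["surefire", "pmd"];
}

console.log(selectMappers([])); // ["surefire", "pmd"]
console.log(selectMappers(["checkstyle"])); // ["checkstyle"]
```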

Example simplified pmd-ruleset.xml:

<?xml version="1.0"?>
<ruleset name="Custom Rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
  <description>
    Clean Code Rules
  </description>

  <!-- :::::: CLEAN CODE :::::: -->
  <!-- Naming rules -->
  <rule ref="category/java/codestyle.xml/ClassNamingConventions"/>
  <rule ref="category/java/codestyle.xml/FieldNamingConventions"/>

  <!-- :::::::: SOLID :::::::: -->
  <!-- SRP (Single Responsibility Principle) rules -->
  <rule ref="category/java/design.xml/TooManyFields"/> <!-- default 15 fields -->
  <rule ref="category/java/design.xml/TooManyMethods"> <!-- default is 10 methods -->
    <properties>
      <property name="maxmethods" value="15" />
    </properties>
  </rule>
</ruleset>

Getting started

Install

Clone the repository and install everything necessary:

# HTTP
git clone https://github.com/divekit/divekit-report-mapper.git
# SSH
git clone git@github.com:divekit/divekit-report-mapper.git

cd ./divekit-report-mapper

npm ci # install all dependencies

npm test # check that everything works as intended

Provide input data

The input data should be provided in the following structure:

divekit-report-mapper
├── target
|   ├── surefire-reports
|   |    ├── fileForTestGroupA.xml
|   |    ├── fileForTestGroupB.xml
|   |    └── ...
|   ├── checkstyle-result.xml
|   └── pmd.xml
└── ...

You can find some examples for valid and invalid inputs in the tests: src/test/resources

npm run dev

Understand the Output

The result of the divekit-report-mapper is an XML file (target/unified.xml). It contains the results of all input sources in a uniform format. This also includes errors if some or all inputs provided invalid or unexpected data.

Example with only valid data:

<?xml version="1.0" encoding="UTF-8"?>
<suites>
  <testsuite xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation=""
             name="E2CleanCodeSolidManualTest" failures="0" type="JUnit" status="failed">
    <testcase name="testCleanCodeAndSolidReview" status="failed" hidden="false">
      <error message="-%20break%20pipeline%20%3C--%0A" type="java.lang.Exception"><![CDATA[java.lang.Exception:
                - break pipeline <--
            	at thkoeln.st.st2praktikum.exercise1.E2CleanCodeSolidManualTest.testCleanCodeAndSolidReview(E2CleanCodeSolidManualTest.java:13)]]>
      </error>
    </testcase>
  </testsuite>

  <testsuite name="Clean-Code-Principles by PMD" status="failed" type="CleanCode">
    <testcase name="Keep it simple, stupid" status="passed" hidden="false"></testcase>
    <testcase name="Meaningful names" status="failed" hidden="false">
      <error type="LocalVariableNamingConventions" location="Line: 90 - 90 Column: 13 - 22"
             file="C:\work\gitlab-repos\ST2MS0_tests_group_d5535b06-ae29-4668-8ad9-bd23b4cc5218\src\main\java\thkoeln\st\st2praktikum\bad_stuff\Robot.java"
             message="The local variable name &apos;snake_case&apos; doesn&apos;t match &apos;[a-z][a-zA-Z0-9]*&apos;"></error>
    </testcase>
  </testsuite>

</suites>

For further examples see tests src/test/resources.

Deployment

All pipeline scripts normally use the latest version from npmjs.com.

The repository is set up with three different GitHub Actions workflows which trigger on pushes to the branches main, stage and development.

  • main: Build, run tests and publish new npm package. Fails if: build/tests fail, the version is a beta version or the version has not been updated
  • stage: same as main but the version must be a beta-version and the package is tagged as beta
  • development: Build and run all tests

Version

Complete packages available at npmjs.com. The versioning is mostly based on semantic versioning.

1.1.2

  • fixed a bug which caused packages to fail if they were built through the automated GitHub Actions workflow

1.1.1

  • moved docs from readme to divekit-docs
  • add continuous delivery pipeline
  • switch to eslint
  • add configurability of pmd principles
  • add surefire parsing error flag
  • update scripts according to new report-visualizer naming

1.0.8

  • Parameters processing added, which allow a restriction of the used mappers
  • Error handling: If a mapper does not deliver a valid result, an error is indicated in the unified.xml.

14 - Report Visualizer

The divekit-report-visualizer creates a website based on an XML file, which visually prepares the content of the report and can be made available via GitHub/GitLab Pages, for example.
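One task of such a visualizer is deriving summary data from the unified report, for example the number of failed test cases (compare the hidden header metadata mentioned in version 1.0.2 below). The shapes and function below are assumptions modeled on the unified.xml example from the report-mapper section, not the actual implementation.

```typescript
// Hypothetical shapes for a parsed unified.xml report.
interface Case {
  name: string;
  status: "passed" | "failed";
}
interface Suite {
  name: string;
  testcases: Case[];
}

// Sketch: count failed test cases across all suites.
function countFailures(suites: Suite[]): number {
  return suites.flatMap((s) => s.testcases).filter((c) => c.status === "failed").length;
}

console.log(
  countFailures([
    { name: "A", testcases: [{ name: "t1", status: "failed" }, { name: "t2", status: "passed" }] },
  ])
); // 1
```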

Architecture overview

Usage in the pipeline

For the usage in the pipeline you just need node as a prerequisite and to provide the input data target/unified.xml. Install and use the report-visualizer as follows:

npm install @divekit/report-visualizer
npx report-visualizer --title PROJECT_NAME

Complete sample test-repo pipeline-script

image: maven:3-jdk-11

stages:
  - build
  - deploy

build: # Build test reports
  stage: build
  script:
    - chmod ugo+x ./setup-test-environment.sh
    - ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
    - mvn pmd:pmd # build clean code report
    - mvn verify -fn # always return status code 0 => Continue with the next stage
  allow_failure: true
  artifacts: # keep reports for the next stage
    paths:
      - target/pmd.xml
      - target/surefire-reports/TEST-*.xml

pages: # gather reports and visualize via gitlab-pages
  image: node:latest
  stage: deploy
  script:
    - npm install @divekit/report-mapper
    - npx report-mapper # run generate unified.xml file
    - npm install @divekit/report-visualizer
    - npx report-visualizer --title $CI_PROJECT_NAME # generate page

  artifacts:
    paths:
      - public
  only:
    - master

Getting started

Install

Clone the repository and install everything necessary:

# HTTP
git clone https://github.com/divekit/divekit-report-visualizer.git
# SSH
git clone git@github.com:divekit/divekit-report-visualizer.git

cd ./divekit-report-visualizer

npm ci # install all dependencies

Provide input data

The input data should be provided in the following structure:

divekit-report-visualizer
├── target
|   └── unified.xml
└── ...

Run it

Directly with provided input target/unified.xml

node bin/report-visualizer

Use predefined input assets/xml-examples/unified.xml

npm run dev

Or use divekit-report-mapper result*

npm run dev++

*This requires that the divekit-report-visualizer is located in the same directory as the divekit-report-mapper.

Output (GitLab Pages)

The output is written to the /public directory, which is used for GitLab Pages or can be hosted anywhere else.

divekit-report-visualizer
├── target
|   └── unified.xml
├── public
|   ├── index.html
|   └── style.css
└── ...

The following picture shows an example output with passed test (green), test failures (orange), errors (red) and a note (gray).

Deployment

Currently, deployment is completely manual. In the future it will be done similarly to the report-mapper.

All pipeline scripts normally use the latest version from npmjs.com.

Version

Complete packages available at npmjs.com. The versioning is mostly based on semantic versioning.

1.0.3

  • Updated naming: from divekit-new-test-page-generator to divekit-report-visualizer

1.0.2

  • Added hidden metadata in the header indicating the number of failed tests.
  • Added possibility to pass a special ‘NoteTest’ test case which is displayed separately.
  • Updated the error message for generation problems so that it is displayed even if only parts of the test page could not be generated.
  • Fixed an error where the test page could not be generated if there was no input.

15 - Test Library

Library which contains a set of generic tests for different testing purposes.

The documentation is not yet written. Feel free to add it yourself ;)

16 - Test page generator

Old version of the report-visualizer. Working based only on provided surefire-reports.

The documentation is not yet written. Feel free to add it yourself ;)

17 - Contribution Guidelines

Contribution Guidelines

18 - Glossary

The documentation is in a very early stage and some parts might be outdated.

Possible candidates:

  • Origin Project
  • Learner Project
  • Test Project
  • Gitlab CI
  • hidden files