4 - Automated Repo Setup
The automated-repo-setup is the heart of Divekit. It implements the core set of features used to automatically provide programming exercises as repositories on the platform GitLab. Optionally, exercises can be individualized using the integrated individualization mechanism.
Setup & Run
Install Node.js (version >= 12.0.0), which is required to run this tool. Node.js can be downloaded from nodejs.org.
To use this tool, clone the repository to your local drive.
This tool uses several libraries, e.g. for accessing the GitLab API. Install them by running the command npm install in the root folder of this project.
Local/GitLab usage
- For local use only:
  - Copy the origin repository into the folder resources/test/input. If this folder does not exist, create the folder test inside the resources folder and then create the folder input inside the newly created folder test.
  - The generated repositories will be located under resources/test/output after running the tool.
- For use with GitLab:
  - Navigate to https://git.st.archi-lab.io/profile/personal_access_tokens (if you are using the GitLab instance git.st.archi-lab.io) and generate an access token (API token) in order to get access to the GitLab API.
  - Copy the access token.
  - Rename the file .env.example to .env.
  - Open .env and replace YOUR_API_TOKEN with the token you copied.
  - Configure the source repository and target group in the config.
Before you can configure or run this tool, you have to copy all the example config files from the *resources/examples/config* folder to the resources/config folder in order to create your own config files. If you want to change the standard behaviour, configure this tool by editing the configs.
To run the application, navigate into the root folder of this tool and run npm start. The repositories will then be generated.
Configuration
As described in the setup section, copy the relevant example config files from the *resources/examples/config* folder to the resources/config folder to create your own config files; edit these configs if you want to change the standard behaviour.
The Divekit uses two types of configs: technical configs and domain-specific configs. The contents of technical configs often change each time repositories are generated with the Divekit, so these configs are located in the resources/config folder of the Divekit. Domain configs do not change each time new repositories are generated, because they depend on the type of exercise and the corresponding domain. As a result, these configs should (but do not have to) be contained in the origin project. The following table lists the different configs, their purpose and their type:
| Config | Purpose | Type |
|---|---|---|
| repositoryConfig | Configure the process of repository generation | Technical config |
| originRepositoryConfig | Configure solution deletion and variable warnings | Domain config |
| variationsConfig | Configure different types of variations | Domain config |
| variableExtensionsConfig | Configure extensions which are used to generate derived variables | Domain config |
| relationsConfig | Configure properties of relations which are used to generate relation variables | Domain config |
Features
Repository Generation
When the Divekit is run, the tool generates repositories based on the options defined in the *repositoryConfig*. The following example shows the relevant options, each explained briefly:
```
{
    "general": {
        # Decide whether you just want to test locally. If set to false the GitLab API will be used
        "localMode": true,
        # Decide whether test repositories should be generated as well. If set to false only one code repository will be generated for each learner
        "createTestRepository": true,
        # Decide whether the repositories should be randomized using the variationsConfig.json
        "variateRepositories": true,
        # Decide whether the existing solution should be deleted using the SolutionDeleter
        "deleteSolution": false,
        # Activate warnings which inform you if there are suspicious variable values remaining after variable placeholders have been replaced
        "activateVariableValueWarnings": true,
        # Define the number of concurrent repository generation processes. Keep in mind that high numbers can overload the GitLab server if localMode is set to false
        "maxConcurrentWorkers": 1,
        # Optional flag: set the logging level. Valid values are "debug", "info", "warn", "error" (case insensitive). Default value is "info".
        "globalLogLevel": "debug"
    },
    "repository": {
        # The name of the repositories. Multiple repositories will be named <repositoryName>_group_<uuid>, <repositoryName>_tests_group_<uuid> ...
        "repositoryName": "st2-praktikum",
        # The number of repositories which will be created. Only relevant if no repositoryMembers were defined
        "repositoryCount": 0,
        # The user names of the members which get access to repositories
        "repositoryMembers": [
            ["st2-praktikum"]
        ]
    },
    "local": {
        # The file path to an origin repository which should be used for local testing
        "originRepositoryFilePath": ""
    },
    "remote": {
        # Id of the repository you want to clone
        "originRepositoryId": 1012,
        # The ids of the target groups where all repositories will be located
        "codeRepositoryTargetGroupId": 161,
        "testRepositoryTargetGroupId": 170,
        # If set to true all existing repositories inside the defined groups will be deleted
        "deleteExistingRepositories": false,
        # Define whether users are added as maintainers or as guests
        "addUsersAsGuests": false
    }
}
```
If localMode is set to true, the application will only generate possible variable variations and randomize files based on a folder which contains the origin repository. This folder should be located at resources/test/input. If the folder resources/test/input does not exist, create it within the root folder of this tool or run the tool once in local mode, which will generate this folder automatically. This can be used to get an idea of which repositories will result from the configs. The following example shows the location of the origin folder:
```
root_of_tool
├── build
├── node_modules
├── src
├── .gitignore
├── .Readme
└── resources
    └── test
        └── input
            └── origin-folder
                ├── src
                ├── .gitignore
                └── .Readme
```
If you don't want to copy the origin repository each time you want to test a new version, specify the file path to the origin repository in the config under local.originRepositoryFilePath.
Partial repository generation
While running the automated-repo-setup in local mode you have the option to generate repositories only partially. To do so, configure the repositoryConfig.json as follows:
```
{
    "general": {
        "localMode": true
    },
    "local": {
        "subsetPaths": [
            "README.md",
            "path/to/malfunction/file.eof"
        ]
    }
}
```
(The repositoryConfig.json above is only partially shown.)
Then start the generation. The generated files are located under resources/output/.
File Assignment
Although code and test files are separeted into two repositories the exercise only consists of one repository called the
origin. It would be really troublesome if you would have to update two repositories all the time while creating a new
exercise. Because of that there has to be a way to determine wheather a file has to be copied to the code project, the
test project or both. If you want some files to only be copied to a specific repository you can express this behaviour
in the filename.
- If the filename contains the string _coderepo the file will only be copied to the code repository.
- If the filename contains the string _testrepo the file will only be copied to the test repository.
- If the filename contains the string _norepo the file will not be copied to the repositories. This can be used to
store config files from this tool directly in the origin repository.
- If the filename contains none of those the file will be copied to both repositories.
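The assignment rules above can be sketched as a small function. This is a hypothetical illustration of the rule set, not the actual Divekit implementation; the function name and return type are made up for this example.

```typescript
// Sketch of the filename-based file assignment rules described above.
type RepoTarget = "code" | "test" | "both" | "none";

function resolveTarget(fileName: string): RepoTarget {
  if (fileName.includes("_norepo")) return "none";   // kept only in the origin
  if (fileName.includes("_coderepo")) return "code"; // code repository only
  if (fileName.includes("_testrepo")) return "test"; // test repository only
  return "both";                                     // default: copy to both
}
```

For example, resolveTarget("MainTest_testrepo.java") yields "test", while a plain README.md is copied to both repositories.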
File Converter
If you want to convert or manipulate certain repository files during the repository generation process File
Converters (File Manipulators) can be used. Currently there is only one type of File Manipulator available.
Additional converters can be easily added by extending the codebase of the Divekit. The already existing File
Manipulartor is called UmletFileManipulator. This manupulator is used to convert the individualized xml
representations of Umlet diagrams to image formats. This convert step can not be skipped
because it is not possible to replace variables in image representations of umlet diagrams. Therefore the process of
individualizing UML diagrams created with umlet is as follows:
UML diagram with placeholder variables (xml) -> UML diagram with already replaced content (xml) -> UML diagram with
already replaced content (image file format)
Test Overview
To give an overview of the passed and failed tests of a repository, a test overview page is generated using the projects report-mapper and report-visualizer. These tools are called within the .gitlab-ci.yml file in the deploy stage.
Repository Overview
If the generation of the overview table is enabled in the repositoryConfig, the destination and the name of the overview table can be defined in the same file:
```
{
    "overview": {
        "generateOverview": true,
        "overviewRepositoryId": 1018,
        "overviewFileName": "st2-praktikum"
    }
}
```
Given the config shown above, a markdown file is generated which includes a summary of all generated repositories and their members. Afterwards the file is uploaded to the configured repository.
Solution Deletion
If you want solutions contained in your origin project to be removed while creating the code and test repositories, enable solution deletion in the repositoryConfig. The originRepositoryConfig specifies the keywords which are used to either
- delete a file,
- delete a paragraph, or
- replace a paragraph.
This is best shown with an example:
```
// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
    //unsup
    return number1 + number2;
    //unsup
}

// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
    //delete
    return number1 * number2;
    //delete
}
```

will be changed to:

```
// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
    throw new UnsupportedOperationException();
}

// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
}
```
The corresponding config entry in the originRepositoryConfig would be:
```
{
    "solutionDeletion": {
        "deleteFileKey": "//deleteFile",
        "deleteParagraphKey": "//delete",
        "replaceMap": {
            "//unsup": "throw new UnsupportedOperationException();"
        }
    }
}
```
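The paragraph deletion and replacement described above can be sketched roughly as follows. This is a hypothetical helper, not the actual SolutionDeleter code; it assumes that a paragraph is the text enclosed between two occurrences of a marker key.

```typescript
// Sketch: remove or replace everything between two occurrences of a marker.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function deleteSolution(
  source: string,
  deleteParagraphKey: string,
  replaceMap: Record<string, string>
): string {
  const keys = [deleteParagraphKey, ...Object.keys(replaceMap)];
  for (const key of keys) {
    // deleteParagraphKey removes the span; replaceMap keys substitute it
    const replacement = key === deleteParagraphKey ? "" : replaceMap[key];
    const pattern = new RegExp(
      `${escapeRegExp(key)}[\\s\\S]*?${escapeRegExp(key)}\\n?`,
      "g"
    );
    source = source.replace(pattern, replacement);
  }
  return source;
}
```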
A file containing the string “//deleteFile” would be deleted.
Individualization
If you want your project to be randomized slightly, use the configuration files variationsConfig, variableExtensionsConfig and relationsConfig to create variables. Variables can later be referenced by their name enclosed in configured delimiter characters, e.g. $ThisIsAVariable$.
Variable Generation
There are three types of variables:
Object Variables
Object Variables are used to randomize entities and value objects. Such variables are created by defining one or multiple ids and an array of possible object variations. Object variations can contain attributes which are later transformed into variables. An example attribute could be Class, which contains the class name of an entity. Keep in mind that attributes are not limited to a single primitive value but can also be expressed as a nested object inside the JSON. The following JSON shows a possible declaration of two object variations inside the variationsConfig:
```
{
    "ids": "Vehicle",
    "objectVariations": [
        {
            "id": "Car",
            "Class": "Car",
            "RepoClass": "CarRepository",
            "SetToOne": "setCar",
            "SetToMany": "setCars"
        },
        {
            "id": "Truck",
            "Class": "Truck",
            "RepoClass": "TruckRepository",
            "SetToOne": "setTruck",
            "SetToMany": "setTrucks"
        },
        {
            "id": "Train",
            "Class": "Train",
            "RepoClass": "TrainRepository",
            "SetToOne": "setTrain",
            "SetToMany": "setTrains"
        }
    ],
    "variableExtensions": [
        "Getter"
    ]
},
{
    "ids": ["Wheel1", "Wheel2"],
    "objectVariations": [
        {
            "id": "FrontWheel",
            "Class": "FrontWheel",
            "RepoClass": "FrontWheelRepository",
            "SetToOne": "setFrontWheel",
            "SetToMany": "setFrontWheels"
        },
        {
            "id": "BackWheel",
            "Class": "BackWheel",
            "RepoClass": "BackWheelRepository",
            "SetToOne": "setBackWheel",
            "SetToMany": "setBackWheels"
        }
    ],
    "variableExtensions": ["Getter"]
}
```
The defined object variations are now randomly assigned to the variables Vehicle, Wheel1 and Wheel2. The following dictionary shows variables which could result from the above declaration:
```
VehicleClass: 'Truck',
VehicleRepoClass: 'TruckRepository',
VehicleGetToOne: 'getTruck',
VehicleGetToMany: 'getTrucks',
VehicleSetToOne: 'setTruck',
VehicleSetToMany: 'setTrucks',
Wheel1Class: 'BackWheel',
Wheel1RepoClass: 'BackWheelRepository',
Wheel1GetToOne: 'getBackWheel',
Wheel1GetToMany: 'getBackWheels',
Wheel1SetToOne: 'setBackWheel',
Wheel1SetToMany: 'setBackWheels',
Wheel2Class: 'FrontWheel',
Wheel2RepoClass: 'FrontWheelRepository',
Wheel2GetToOne: 'getFrontWheel',
Wheel2GetToMany: 'getFrontWheels',
Wheel2SetToOne: 'setFrontWheel',
Wheel2SetToMany: 'setFrontWheels'
```
In the example above you can see that some variables could be derived from already existing variables; the setter variables are a perfect example of this. Such variables can also be defined through variable extensions, as is done for the getter variables in the example. Two steps are required to define such derived variables:
- Define a rule for a variable extension in the config variableExtensionsConfig.json:
```
{
    "id": "Getter",
    "variableExtensions": {
        "GetToOne": {
            "preValue": "get",
            "value": "CLASS",
            "postValue": "",
            "modifier": "NONE"
        },
        "GetToMany": {
            "preValue": "get",
            "value": "PLURAL",
            "postValue": "",
            "modifier": "NONE"
        }
    }
}
```
The value attribute references an already existing variable which is modified through the given modifier. Valid modifiers can, for example, convert the given variable to an all-lower-case variant. The resulting value is then concatenated with the preValue and postValue like so: preValue + modifier(value) + postValue.
- Define a certain variable extension for an object by adding the id of the variable extension to the list of variable extensions of that object (see the example above).
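The derivation rule preValue + modifier(value) + postValue can be sketched as follows. This is a hypothetical illustration: the lookup keys for the base values ("CLASS", "PLURAL") and the modifier handling are assumptions based on the config example above, not the actual Divekit code.

```typescript
// Sketch of deriving a new variable from an object's base values.
interface VariableExtension {
  preValue: string;
  value: string;    // references a base value, e.g. "CLASS" or "PLURAL"
  postValue: string;
  modifier: string; // e.g. "NONE"; other modifiers could lower-case the value
}

function applyModifier(value: string, modifier: string): string {
  // "NONE" leaves the value untouched; anything else lower-cases it here
  return modifier === "NONE" ? value : value.toLowerCase();
}

function deriveVariable(
  objectId: string,                   // e.g. "Vehicle"
  baseValues: Record<string, string>, // e.g. { CLASS: "Truck", PLURAL: "Trucks" }
  extensionName: string,              // e.g. "GetToOne"
  ext: VariableExtension
): [string, string] {
  const value =
    ext.preValue + applyModifier(baseValues[ext.value], ext.modifier) + ext.postValue;
  return [objectId + extensionName, value]; // e.g. ["VehicleGetToOne", "getTruck"]
}
```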
Relation Variables
Relation Variables are used to randomize relations between entities. They are defined by declaring an array of *relationShips* and an array of *relationObjects* inside the variationsConfig. Both arrays must be of equal length because each set of relationObjects is assigned to a relationShip.
In order to define a relationShip you have to provide an id and a reference to a relationShip type. These types are defined in the file relationsConfig and can contain any kind of attributes:
```
{
    "id": "OneToOne",
    "Umlet": "lt=-\nm1=1\nm2=1",
    "Short": "1 - 1",
    "Description": "one to one"
}
```
In order to define a set of relationObjects you have to provide an id and two object references. The following JSON shows an example definition of relations:
```
{
    "relationShips": [
        {
            "id": "Rel1",
            "relationType": "OneToOne"
        },
        {
            "id": "Rel2",
            "relationType": "OneToMany"
        }
    ],
    "relationObjects": [
        {
            "id": "RelVehicleWheel1",
            "Obj1": "Vehicle",
            "Obj2": "Wheel1"
        },
        {
            "id": "RelVehicleWheel2",
            "Obj1": "Vehicle",
            "Obj2": "Wheel2"
        }
    ]
}
```
For each relationship, two kinds of variables are generated.
One kind clarifies which objects belong to a certain relationship. These variables start with, for example, Rel1, as defined in the section relationShips.
The other kind clarifies which relationship belongs to a set of objects. These variables start with, for example, RelVehicleWheel1, as defined in the section relationObjects.
For each of these two kinds a set of variables is generated. The first set contains attributes of the relation types defined in the relationsConfig. The other set contains attributes of the objects defined in the variationsConfig.
The following shows the set of variables which would be generated for a single relationship:
```
Rel1_Umlet: 'lt=-\nm1=1\nm2=1',
Rel1_Short: '1 - 1',
Rel1_Description: 'one to one',
Rel1_Obj1Class: 'Truck',
Rel1_Obj1RepoClass: 'TruckRepository',
Rel1_Obj1GetToOne: 'getTruck',
Rel1_Obj1GetToMany: 'getTrucks',
Rel1_Obj1SetToOne: 'setTruck',
Rel1_Obj1SetToMany: 'setTrucks',
Rel1_Obj2Class: 'BackWheel',
Rel1_Obj2RepoClass: 'BackWheelRepository',
Rel1_Obj2GetToOne: 'getBackWheel',
Rel1_Obj2GetToMany: 'getBackWheels',
Rel1_Obj2SetToOne: 'setBackWheel',
Rel1_Obj2SetToMany: 'setBackWheels',
RelVehicleWheel1_Umlet: 'lt=-\nm1=1\nm2=1',
RelVehicleWheel1_Short: '1 - 1',
RelVehicleWheel1_Description: 'one to one',
RelVehicleWheel1_Obj1Class: 'Truck',
RelVehicleWheel1_Obj1RepoClass: 'TruckRepository',
RelVehicleWheel1_Obj1GetToOne: 'getTruck',
RelVehicleWheel1_Obj1GetToMany: 'getTrucks',
RelVehicleWheel1_Obj1SetToOne: 'setTruck',
RelVehicleWheel1_Obj1SetToMany: 'setTrucks',
RelVehicleWheel1_Obj2Class: 'BackWheel',
RelVehicleWheel1_Obj2RepoClass: 'BackWheelRepository',
RelVehicleWheel1_Obj2GetToOne: 'getBackWheel',
RelVehicleWheel1_Obj2GetToMany: 'getBackWheels',
RelVehicleWheel1_Obj2SetToOne: 'setBackWheel',
RelVehicleWheel1_Obj2SetToMany: 'setBackWheels',
```
Logic variables
Logic Variables are used to randomize the logic elements of an exercise. The idea behind this concept is that you can define multiple groups of business logic, but only one group of business logic is assigned to each individual exercise. Logic variables can also be used to define text which describes a certain business logic. Here is an example definition of logic variables:
```
{
    "id": "VehicleLogic",
    "logicVariations": [
        {
            "id": "VehicleCrash",
            "Description": "Keep in mind that this text is just an example. \nThis is a new line"
        },
        {
            "id": "VehicleShop",
            "Description": "The Vehicle Shop exercise was selected"
        }
    ]
}
```
The example above generates only one variable, called VehicleLogicDescription. The interesting part of the logic variations are the ids. If you append an underscore followed by such an id to the end of a filename, the file is only inserted into an individual repository if said id was selected during randomization.
E.g. the file VehicleCrashTest_VehicleCrash.java is only inserted if the logic VehicleCrash was selected; the file VehicleShopTest_VehicleShop.java is only inserted if the logic VehicleShop was selected.
This can be used to dynamically insert certain test classes which test a specific business logic. If a certain test class was not inserted into an individual repository, the learner solving this exercise does not have to implement the corresponding business logic.
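The filename filter described above can be sketched as follows. This is a hypothetical illustration of the selection rule; the real Divekit logic may differ in details such as suffix parsing.

```typescript
// Sketch: decide whether a file belongs in an individual repository,
// based on an optional "_<logicId>" suffix before the file extension.
function isFileSelected(
  fileName: string,
  selectedLogicIds: string[], // e.g. ["VehicleCrash"]
  allLogicIds: string[]       // e.g. ["VehicleCrash", "VehicleShop"]
): boolean {
  const base = fileName.replace(/\.[^.]+$/, "");  // strip the file extension
  const suffix = base.substring(base.lastIndexOf("_") + 1);
  if (!allLogicIds.includes(suffix)) return true; // no logic suffix: always copy
  return selectedLogicIds.includes(suffix);       // copy only if selected
}
```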
Variable Post Processing
Often variable values are needed not only capitalized but also in lower-case form. Therefore, for each generated variable three variants are generated:
- The first variant is the variable itself without further changes, e.g. VehicleClass -> MonsterTruck
- The second variant sets the first character to lower case, e.g. vehicleClass -> monsterTruck
- The third variant sets all characters to lower case, e.g. vehicleclass -> monstertruck
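The three variants can be produced by a small helper like the following. This is an illustrative sketch, not the Divekit's own post-processing code.

```typescript
// Sketch: generate the three case variants for one variable.
function caseVariants(name: string, value: string): Record<string, string> {
  const lowerFirst = (s: string) => s.charAt(0).toLowerCase() + s.slice(1);
  return {
    [name]: value,                             // VehicleClass -> MonsterTruck
    [lowerFirst(name)]: lowerFirst(value),     // vehicleClass -> monsterTruck
    [name.toLowerCase()]: value.toLowerCase(), // vehicleclass -> monstertruck
  };
}
```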
Variable Replacement
In the process of repository individualization, all defined variables in all origin repository files are replaced with their corresponding values. Typically, every variable which should be replaced is decorated with a specific string at the start and the end of the variable, e.g. $VehicleClass$ or xxxVehicleClassxxx. This string helps to identify variables. If needed, this string can be set to an empty string; in this case the variable name can be inserted into specific files without further decoration. This can lead to problems during variable replacement, so the Divekit takes certain measures to ensure that all variables are replaced correctly. The decoration string can be configured in the originRepositoryConfig:
```
{
    "variables": {
        "variableDelimeter": "$"
    }
}
```
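At its core, delimiter-based replacement amounts to the following sketch. This is a simplified, hypothetical helper; the real Divekit additionally guards against partially replaced or overlapping variable names.

```typescript
// Sketch: replace every delimited variable occurrence with its value.
function replaceVariables(
  content: string,
  variables: Record<string, string>,
  delimiter: string
): string {
  for (const [name, value] of Object.entries(variables)) {
    // e.g. replaces "$VehicleClass$" with "Truck" when delimiter is "$"
    content = content.split(delimiter + name + delimiter).join(value);
  }
  return content;
}
```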
Variable Value Warnings
If this feature is activated within the repositoryConfig, the tool emits warnings informing you if there are suspicious variable values remaining after the variable placeholders have been replaced. If, for example, a learner has to solve an exercise which contains Trucks instead of Cars (see the config above), then the solution of this learner should not contain variable values like "Car", "CarRepository", "setCar" or "setCars". In the originRepositoryConfig you can define a whitelist of file types which should be included in the warning process.
Additionally, an ignoreList can be configured: if a variable value is contained in one of the values defined in the ignoreList, this specific variable value will not trigger a warning. In addition, the ignoreFileList can contain filenames which should be completely excluded from the warning process.
The following JSON is an example of the discussed options:
```
{
    "warnings": {
        "variableValueWarnings": {
            "typeWhiteList": [
                "json",
                "java",
                "md"
            ],
            "ignoreList": [
                "name",
                "type"
            ],
            "ignoreFileList": [
                "individualizationCheck_testrepo.json",
                "variationsConfig_testrepo.json",
                "IndividualizationTest_testrepo.java"
            ]
        }
    }
}
```
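The warning rules can be sketched as a filter over the variable values that were not chosen for a given repository. Function and parameter names here are illustrative, not the Divekit's actual API.

```typescript
// Sketch: report leftover variable values, honoring the warning config.
interface WarningConfig {
  typeWhiteList: string[];  // file extensions to scan
  ignoreList: string[];     // values that never trigger a warning
  ignoreFileList: string[]; // files excluded from the warning process
}

function findSuspiciousValues(
  fileName: string,
  content: string,
  unusedVariableValues: string[], // values from variations NOT chosen for this repo
  config: WarningConfig
): string[] {
  const fileType = fileName.split(".").pop() ?? "";
  if (!config.typeWhiteList.includes(fileType)) return [];
  if (config.ignoreFileList.includes(fileName)) return [];
  return unusedVariableValues.filter(
    (value) => !config.ignoreList.includes(value) && content.includes(value)
  );
}
```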
Individual Repository Persist
When you run the tool, the default behaviour is to generate individual variables for each repository specified in the repositoryConfig. If you want to reuse already generated variables, set "useSavedIndividualRepositories" to true and define a file name under "savedIndividualRepositoriesFileName". The file name is relative to the folder resources/individual_repositories. These options are defined in the repositoryConfig:
```
{
    "individualRepositoryPersist": {
        "useSavedIndividualRepositories": true,
        "savedIndividualRepositoriesFileName": "individual_repositories_22-06-2021 12-58-31.json"
    }
}
```
A single entry in such an individual repositories file can be edited with a normal text editor and could look like this:
```
{
    "id": "67e6be38-ae36-4fbf-9d03-0993d97f7559",
    "members": [
        "user1"
    ],
    "individualSelectionCollection": {
        "individualObjectSelection": {
            "Vehicle": "Truck",
            "Wheel1": "BackWheel",
            "Wheel2": "FrontWheel"
        },
        "individualRelationSelection": {
            "Rel1": "RelVehicleWheel2",
            "Rel2": "RelVehicleWheel1"
        },
        "individualLogicSelection": {
            "VehicleLogic": "VehicleCrash"
        }
    }
}
```
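Reading such a saved file back can be sketched as below. This is a hypothetical helper that assumes the file holds an array of entries shaped like the one above; interface and function names are illustrative only.

```typescript
// Sketch: parse a saved individual_repositories file and look up a member.
interface IndividualRepository {
  id: string;
  members: string[];
  individualSelectionCollection: {
    individualObjectSelection: Record<string, string>;
    individualRelationSelection: Record<string, string>;
    individualLogicSelection: Record<string, string>;
  };
}

function findEntryForMember(
  fileContent: string, // JSON text of the saved file (assumed to be an array)
  userName: string
): IndividualRepository | undefined {
  const entries: IndividualRepository[] = JSON.parse(fileContent);
  return entries.find((entry) => entry.members.includes(userName));
}
```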
Components
The component diagram above shows the components of the Divekit which are used in the process of generating and individualizing repositories. In the following, the repository generation process is explained step by step, and the components relevant in each step are described:
The Repository Creator delegates most of the tasks involved in the repository generation process to other
components. Before repositories are generated the Repository Creator calls the Repository Adapter to prepare the
environment. This includes for example creating empty folders for repositories or deleting previous data which is
contained in the destination folder. A Repository Adapter functions like an interface to the environment in which
new repositories are being generated. At the moment there are two kinds of Repository Adapters: One for the local
file system and one for Gitlab.
The Content Retriever retrieves all files from the configured origin repository. In order to access the origin repository, the component uses a Repository Adapter. If solution deletion is activated, the solution contained inside the origin repository is deleted in the retrieved origin files (not in the origin repository itself).
For each configured repository or learner, a specific configuration is generated by the Individual Repository Manager. This configuration is used by other components while generating repositories and contains, for example, a unique id and the usernames of learners. If individualization is activated, specific variations and corresponding variables are generated for each configuration by the Variation Generator. These variations and variables are also contained in the separate configurations generated by the Individual Repository Manager.
For each repository configuration generated by the Individual Repository Manager in the previous step, a Content Provider is instantiated. After varying the content using the randomly generated variations from the previous step, the defined File Manipulators (File Converters) are executed. Finally, the resulting files are pushed to a new repository using a Repository Adapter.
After all Content Providers are finished with generating each corresponding repository the Overview Generator
collects basic information from the Content Providers and generates an overview of all links leading to Code
Projects, Test Projects and Test Pages.
The following table lists the relevant packages inside the codebase for each component:
| Component | Relevant Packages |
|---|---|
| Repository Creator | repository_creation |
| Individual Repository Manager | repository_creation |
| Variation Generator | content_variation |
| Content Retriever | content_manager, solution_deletion |
| Content Provider | content_manager, content_variation, file_manipulator |
| Overview Generator | generate_overview |
| Repository Adapter | repository_adapter |
Design-Decisions
| Design-Decision | Explanation |
|---|---|
| TypeScript chosen as programming language | Easy handling of dynamic JSON structures; good API support for GitLab; platform independent; can be executed locally with Node.js |
5 - cli
CLI to simplify and orchestrate the usage of multiple Divekit tools.
The following documentation is more a concept than a description of a finished application.
Getting started
Prerequisites
usage
```shell
npm i @divekit/cli -g     # install
mkdir st-2042             # create empty project directory
cd st-2042
divekit init -g 152 -c 14 # initializes in GitLab dir with groupId 152 and with config base from project in GitLab dir 14
```
file structure
```
st-2042
├── .divekit               # all configs, tools etc. needed for this project and the divekit-cli
|   ├── config
|   |   ├── automated-repo-setup
|   |   ├── ...
|   |   └── basic.yml      # name, milestone data, ...
|   └── tools
|       ├── access-manager
|       └── ...
├── milestone1
|   ├── config.yml         # milestone specific domain-config
|   └── origin-repo
|       └── ...
├── milestone2
└── ...
```
basic.yml example
```yaml
name: ST1-2022
rootStaffGroupId: 42
rootStudentGroupId: 43
overviewGroupId: 44
milestones:
  m1:
    groupId: 54
    start: 20.08.2023
    end: 04.09.2023
    codeRepoGroupId: 142
    testRepoGroupId: 143
  m2:
    groupId: 67
    start: 07.09.2023
    end: 27.09.2023
    codeRepoGroupId: 152
    testRepoGroupId: 153
  # ...
```
Functionality
divekit init
Initialize a new Divekit project; this is done once per semester and module. Must be used in an empty directory, otherwise it only checks and installs existing dependencies needed for the tools. An example of a generated file structure can be seen above.
Options
- -w / --wizard: install via wizard, which asks and explains all parameters and gives options if available
- -g / --groupId: groupId of the corresponding GitLab directory (should be empty as well)
- -n / --name: set a name for the project (defaults to the name of the parent directory)
- -c / --config <groupId>: copy a previously used config from the given project (nice to have: also works on old projects)
Errors
- exits if node, python or pip are not available
Example usage
Creating a new project with the help of the wizard.
```
mkdir st1-2023
divekit init --wizard
$ "Please enter the GitLab GroupId where all staff content will be managed (should be empty):"
$ 150 # <user_input>
$ "Please enter the GitLab GroupId where all student repositories will be published:"
$ 151 # <user_input>
$ "Please enter a name for the project (default: st1-2023)"
$ # <user_input>
$ "Do you want to use an already existing configuration? Then please enter a groupId for the corresponding project, otherwise a basic config will be initialized"
$ 42 # <user_input>
$ "Divekit initialized successfully"
# Result: Directory .divekit created locally and pushed to GitLab
```
divekit milestone
Manage the creation, testing, publication and more for a milestone.
If not otherwise configured, naming, config etc. are copied from the last milestone.
Options
- -w / --wizard: create the next milestone via wizard, which asks and explains all parameters and gives options if available (for more information see the example below)
- -n / --name <string>: set a custom name for the milestone (defaults to "milestone")
- -t / --template <groupId>: create the next milestone based on an already existing origin repo
- -f / --fresh: while creating a new milestone, ignore already existing milestones and their configuration
- -i / --id <number>: specify the milestoneId (defaults to 1, or the last existing milestone + 1)
- -l / --local <number>: create a given number of student repos locally in the demo folder
- -r / --random <none | all | roundrobin>: configure randomization: none at all, all (like normal student repo generation), or roundrobin, which tries to create a uniform distribution of all variations
- -p / --publish: create and publish all repos for students based on the origin repo
- -d / --demo <repoCount>: create a given count of test repositories
Example usage
Create the first milestone with the help of the wizard
```
divekit milestone --wizard
$ "creating the first milestone of this project"
$ "please enter a number for the first milestone (default: 1)"
$ # <user_input>
$ "Do you want to use an already existing milestone as template?"
$ "Please enter the groupId, otherwise skip with Enter"
$ 47 # <user_input>
$ "Milestone created - please configure the origin repository"
# Result:
# - milestone1 directory created based on template repo
# - updated configs with new milestone and pushed everything to GitLab
# - also created directories for code/test repositories in GitLab
```
Create further milestone with the help of the wizard
```
divekit milestone --wizard
$ "creating a milestone for st1-2023"
$ "please enter a number for the milestone (default: 2)"
$ # <user_input>
$ "Do you want to use an already existing milestone as template?"
$ "Please enter the groupId, otherwise milestone2 will be based on milestone1"
$ # <user_input>
$ "Milestone created - please configure the origin repository"
# Result:
# - milestone2 directory created based on milestone1 repo
# - updated configs with new milestone and pushed everything to GitLab
# - also created directories for code/test repositories for milestone2 in GitLab
```
divekit patch
Edit the current or given milestone.
Options
- -m / --milestone <number>: number of the milestone you want to edit (default: latest existing milestone)
- -g / --g <[groupId]>: list of groupIds where edits are published (default: student repositories of the milestone)
- -f / --files <[names]>: list of files to edit/create
- -c / --commit <msg>: custom commit msg (default: "fix 42")
- -d / --debug: sets loglevel to debug
Use Cases and other ideas - which should be considered
- create repos for latecomers, e.g. because they passed the previous milestone after a discussion or forgot to register.
- divide the config structure between technical and domain configuration
13 - Report Mapper
The report-mapper is an integral part of the GitLab CI/CD pipeline for all test repositories. The report-mapper ensures that various inputs from Surefire reports, PMD and Checkstyle code-style checks are converted into a uniform format.
The produced result is an XML file which is subsequently used by the report-visualizer for a readable output.
Architecture overview
Usage in the pipeline
For usage in the pipeline you just need node as a prerequisite; then install and use the report-mapper as follows:

```shell
npm install @divekit/report-mapper
npx report-mapper
```

Keep in mind to provide the needed input data based on your configuration.
Complete sample test-repo pipeline-script
```yaml
image: maven:3-jdk-11

stages:
  - build
  - deploy

build: # Build test reports
  stage: build
  script:
    - chmod ugo+x ./setup-test-environment.sh
    - ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
    - mvn pmd:pmd # build clean code report
    - mvn verify -fn # always return status code 0 => continue with the next stage
  allow_failure: true
  artifacts: # keep reports for the next stage
    paths:
      - target/pmd.xml
      - target/surefire-reports/TEST-*.xml

pages: # gather reports and visualize via gitlab-pages
  image: node:latest
  stage: deploy
  script:
    - npm install @divekit/report-mapper
    - npx report-mapper # generate the unified.xml file
    - npm install @divekit/report-visualizer
    - npx report-visualizer --title $CI_PROJECT_NAME # generate page
  artifacts:
    paths:
      - public
  only:
    - master
```
Configuration
The report mapper is configurable in two main ways:
- By defining which inputs are expected and should therefore be computed.
This is configurable via parameters. You can choose from the following: pmd, checkstyle* and surefire.
If none are provided, it defaults to surefire and pmd.
npx report-mapper [surefire pmd checkstyle]
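To illustrate this defaulting behaviour, the parameter selection could be sketched as follows (a simplified sketch, not the report-mapper's actual source; `selectMappers` is a hypothetical helper):

```javascript
// Sketch: pick which mappers to run based on CLI arguments.
// Hypothetical helper, not the actual report-mapper implementation.
const KNOWN_MAPPERS = ['surefire', 'pmd', 'checkstyle'];

function selectMappers(args) {
  // Keep only arguments that name a known mapper
  const requested = args.filter((arg) => KNOWN_MAPPERS.includes(arg));
  // Default to surefire and pmd when nothing valid was requested
  return requested.length > 0 ? requested : ['surefire', 'pmd'];
}

console.log(selectMappers([]));                    // → [ 'surefire', 'pmd' ]
console.log(selectMappers(['checkstyle', 'pmd'])); // → [ 'checkstyle', 'pmd' ]
```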
- The second option is specific to PMD. PMD itself has a configuration file,
pmd-ruleset.xml,
which defines which PMD rules should be checked. The report-mapper also reads this file and shapes its output based on the available rules.
Note: The assignment of PMD rules to clean code and solid principles is as of now hardcoded and not configurable.
*The checkstyle-mapper is currently not included in the testing and therefore should be used with caution.
Example of a simplified pmd-ruleset.xml:
<?xml version="1.0"?>
<ruleset name="Custom Rules"
xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
<description>
Clean Code Rules
</description>
<!-- :::::: CLEAN CODE :::::: -->
<!-- Naming rules -->
<rule ref="category/java/codestyle.xml/ClassNamingConventions"/>
<rule ref="category/java/codestyle.xml/FieldNamingConventions"/>
<!-- :::::::: SOLID :::::::: -->
<!-- SRP (Single Responsibility Principle) rules -->
<rule ref="category/java/design.xml/TooManyFields"/> <!-- default 15 fields -->
<rule ref="category/java/design.xml/TooManyMethods"> <!-- default is 10 methods -->
<properties>
<property name="maxmethods" value="15" />
</properties>
</rule>
</ruleset>
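As a sketch of how such a ruleset can be inspected programmatically - e.g. to see which rule references will shape the output - the `ref` attributes can be extracted like this (`listRuleRefs` is a hypothetical helper, not part of the report-mapper; a real parser would use a proper XML library):

```javascript
// Sketch: list the rule references declared in a pmd-ruleset.xml.
// Simplified regex-based reading, not the report-mapper's actual parser.
const ruleset = `
<ruleset name="Custom Rules">
  <rule ref="category/java/codestyle.xml/ClassNamingConventions"/>
  <rule ref="category/java/design.xml/TooManyMethods">
    <properties>
      <property name="maxmethods" value="15" />
    </properties>
  </rule>
</ruleset>`;

function listRuleRefs(xml) {
  // Collect every ref="..." attribute on <rule> elements
  return [...xml.matchAll(/<rule\s+ref="([^"]+)"/g)].map((m) => m[1]);
}

console.log(listRuleRefs(ruleset));
// → [ 'category/java/codestyle.xml/ClassNamingConventions',
//     'category/java/design.xml/TooManyMethods' ]
```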
Getting started
Install
Clone the repository and install everything necessary:
# HTTP
git clone https://github.com/divekit/divekit-report-mapper.git
# SSH
git clone git@github.com:divekit/divekit-report-mapper.git
cd ./divekit-report-mapper
npm ci # install all dependencies
npm test # check that everything works as intended
The input data should be provided in the following structure:
divekit-report-mapper
├── target
| ├── surefire-reports
| | ├── fileForTestGroupA.xml
| | ├── fileForTestGroupB.xml
| | └── ...
| ├── checkstyle-result.xml
| └── pmd.xml
└── ...
You can find some examples of valid and invalid inputs in the tests: src/test/resources
Understand the Output
The result of the divekit-report-mapper is an XML file (target/unified.xml).
It contains the results of all input sources in a uniform format. This also includes errors if some or all inputs provided invalid or unexpected data.
Example with only valid data:
<?xml version="1.0" encoding="UTF-8"?>
<suites>
<testsuite xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation=""
name="E2CleanCodeSolidManualTest" failures="0" type="JUnit" status="failed">
<testcase name="testCleanCodeAndSolidReview" status="failed" hidden="false">
<error message="-%20break%20pipeline%20%3C--%0A" type="java.lang.Exception"><![CDATA[java.lang.Exception:
- break pipeline <--
at thkoeln.st.st2praktikum.exercise1.E2CleanCodeSolidManualTest.testCleanCodeAndSolidReview(E2CleanCodeSolidManualTest.java:13)]]>
</error>
</testcase>
</testsuite>
<testsuite name="Clean-Code-Principles by PMD" status="failed" type="CleanCode">
<testcase name="Keep it simple, stupid" status="passed" hidden="false"></testcase>
<testcase name="Meaningful names" status="failed" hidden="false">
<error type="LocalVariableNamingConventions" location="Line: 90 - 90 Column: 13 - 22"
file="C:\work\gitlab-repos\ST2MS0_tests_group_d5535b06-ae29-4668-8ad9-bd23b4cc5218\src\main\java\thkoeln\st\st2praktikum\bad_stuff\Robot.java"
message="The local variable name 'snake_case' doesn't match '[a-z][a-zA-Z0-9]*'"></error>
</testcase>
</testsuite>
</suites>
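A downstream consumer such as the report-visualizer can derive simple statistics from this uniform format. The following is a minimal sketch (`countTestcases` is a hypothetical helper; it is regex-based for brevity, and a real consumer would use a proper XML parser):

```javascript
// Sketch: tally testcase results from a unified.xml document.
// Hypothetical helper, not the actual report-visualizer implementation.
const unifiedXml = `
<suites>
  <testsuite name="Clean-Code-Principles by PMD" status="failed" type="CleanCode">
    <testcase name="Keep it simple, stupid" status="passed" hidden="false"></testcase>
    <testcase name="Meaningful names" status="failed" hidden="false"></testcase>
  </testsuite>
</suites>`;

function countTestcases(xml) {
  const counts = { passed: 0, failed: 0 };
  // Count the status attribute of every <testcase> element
  for (const match of xml.matchAll(/<testcase[^>]*status="(passed|failed)"/g)) {
    counts[match[1]] += 1;
  }
  return counts;
}

console.log(countTestcases(unifiedXml)); // → { passed: 1, failed: 1 }
```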
For further examples see the tests: src/test/resources.
Deployment
All pipeline scripts normally use the latest version from
npmjs.com.
The repository is set up with three different GitHub Actions workflows which trigger on pushes to the branches main, stage and development.
- main: Build, run tests and publish a new npm package. Fails if the build or tests fail, the version is a beta version, or the version has not been updated
- stage: same as main, but the version must be a beta version and the package is tagged as beta
- development: Build and run all tests
Version
Complete packages available at npmjs.com.
The versioning is mostly based on semantic versioning.
1.1.2
- fixed a bug which caused packages to fail if they were built through the automated GitHub Actions workflow
1.1.1
- moved docs from readme to divekit-docs
- add continuous delivery pipeline
- switch to eslint
- add configurability of pmd principles
- add surefire parsing error flag
- update scripts according to new report-visualizer naming
1.0.8
- Added parameter processing, which allows restricting which mappers are used
- Error handling: if a mapper does not deliver a valid result, an error is indicated in the unified.xml.
14 - Report Visualizer
The divekit-report-visualizer creates a website based on an XML file; it visually prepares the content of the report and can be published via GitHub/GitLab Pages, for example.
Architecture overview
Usage in the pipeline
For usage in the pipeline you just need node
as a prerequisite and have to provide the input data: target/unified.xml.
Install and use the report-visualizer as follows:
npm install @divekit/report-visualizer
npx report-visualizer --title PROJECT_NAME
Complete sample test-repo pipeline-script
image: maven:3-jdk-11

stages:
  - build
  - deploy

build: # Build test reports
  stage: build
  script:
    - chmod ugo+x ./setup-test-environment.sh
    - ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
    - mvn pmd:pmd # build clean code report
    - mvn verify -fn # always return status code 0 => continue with the next stage
  allow_failure: true
  artifacts: # keep reports for the next stage
    paths:
      - target/pmd.xml
      - target/surefire-reports/TEST-*.xml

pages: # gather reports and visualize via gitlab-pages
  image: node:latest
  stage: deploy
  script:
    - npm install @divekit/report-mapper
    - npx report-mapper # generate the unified.xml file
    - npm install @divekit/report-visualizer
    - npx report-visualizer --title $CI_PROJECT_NAME # generate page
  artifacts:
    paths:
      - public
  only:
    - master
Getting started
Install
Clone the repository and install everything necessary:
# HTTP
git clone https://github.com/divekit/divekit-report-visualizer.git
# SSH
git clone git@github.com:divekit/divekit-report-visualizer.git
cd ./divekit-report-visualizer
npm ci # install all dependencies
The input data should be provided in the following structure:
divekit-report-visualizer
├── target
| └── unified.xml
└── ...
Run it
Directly, with the provided input target/unified.xml:
node bin/report-visualizer
Alternatively, use the predefined input assets/xml-examples/unified.xml or the divekit-report-mapper result.*
*This requires that the divekit-report-visualizer is located in the same directory as the divekit-report-mapper.
Output (GitLab Pages)
The output is written to the /public
directory, which is used for GitLab Pages or can be hosted anywhere.
divekit-report-visualizer
├── target
| └── unified.xml
├── public
| ├── index.html
| └── style.css
└── ...
The following picture shows an example output with passed tests (green), test failures (orange), errors (red) and a note (gray).
Deployment
Currently, deployment is completely manual. In the future it will be handled similarly to the report-mapper.
All pipeline scripts normally use the latest version from
npmjs.com.
Version
Complete packages available at npmjs.com.
The versioning is mostly based on semantic versioning.
1.0.3
- Updated naming: from divekit-new-test-page-generator to divekit-report-visualizer
1.0.2
- Added hidden metadata in the header indicating the number of failed tests.
- Added possibility to pass a special ‘NoteTest’ test case which is displayed separately.
- Updated the error message for generation problems so that it is displayed even if only parts of the test page
could not be generated.
- Fixed an error where the test page could not be generated if there was no input.