Testing languages

Introduction

Testing is an essential part of a language designer's work, so MPS provides testing facilities both for BaseLanguage code and for language definitions. While the jetbrains.mps.baselanguage.unitTest language enables JUnit-like unit tests for testing BaseLanguage code, the jetbrains.mps.lang.test language provides a convenient interface for creating language tests.

Quick navigation table

Different aspects of language definitions are tested with different means:

Intentions, Actions, Side-transforms, Editor, ActionMaps, KeyMaps

Use the jetbrains.mps.lang.test language to create EditorTestCases. You set the stage by providing an initial piece of code, define a set of editing actions to perform against the initial code, and provide the expected outcome as another piece of code. Any differences between the expected and actual output of the test will be reported as errors.

For more information, refer to the Editor Tests section.

Constraints, Scopes, Type-system, Dataflow

Use the jetbrains.mps.lang.test language to create NodesTestCases. In these test cases, write snippets of "correct" code and ensure that no error or warning is reported on them. Similarly, write "invalid" pieces of code and assert that an error or a warning is reported on the correct node.

For more information, refer to the Nodes Tests section.

Generator

Use the jetbrains.mps.lang.test.generator language to create GeneratorTests. These let you transform test models and check whether the generated code matches the expected outcome.

For more information, refer to the Generator Tests section.

TextGen

There is currently no built-in testing facility for this aspect. There are a few practices that have worked for us over time:

  • Perhaps the most reasonable way to check the generation process is to generate models for which you already know the correct generation result and then compare the generated output with the expected one. For example, if your generated code is stored in a VCS, you could check for differences after each run of the tests (see the sketch after this list).

  • You may also consider providing code snippets that may represent corner cases for the generator and check whether the generator successfully generates output from them, or whether it fails.

  • Compiling and running the generated code may also increase your confidence about the correctness of your generator.
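
The following plain-Java sketch illustrates the first practice above: it compares freshly generated files against a baseline copy whose correctness is already known. It is only a generic illustration; the class name, paths and helper are hypothetical and do not correspond to any MPS API (Files.mismatch requires Java 12+).

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    public final class GeneratedOutputCheck {
        // Compares every regular file under 'expected' with its counterpart under 'actual'
        // and fails if a file is missing or differs from the known-good baseline.
        static void assertSameOutput(Path expected, Path actual) throws IOException {
            try (Stream<Path> files = Files.walk(expected)) {
                for (Path exp : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                    Path act = actual.resolve(expected.relativize(exp));
                    if (!Files.exists(act)) {
                        throw new AssertionError("Missing generated file: " + act);
                    }
                    if (Files.mismatch(exp, act) != -1) {
                        throw new AssertionError("Generated file differs from the baseline: " + act);
                    }
                }
            }
        }
    }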

Migrations

Use the jetbrains.mps.lang.test language to create MigrationTestCases. In these test cases, write pieces of code and run migrations on them.

For more information, refer to the Migration Tests section.

Tests creation

There are two options to add test models into your projects.

1. Create a Test aspect in your language

This is easier to set up, but the aspect can only contain tests that do not need to run in a newly started MPS instance, so it typically holds plain BaseLanguage unit tests. To create the Test aspect, right-click the language node and choose New->Test Aspect.

T2.png

Now you can start creating unit tests in the Test aspect.

T1.png

Right-clicking on the Test aspect will give you the option to run all tests. The test report will then show up in a Run panel at the bottom of the screen.

2. Create a test model

This option gives you more flexibility. Create a test model, either in a new or an existing solution. Make sure the model's stereotype is set to tests.

T3.png

Open the model's properties and add the jetbrains.mps.baselanguage.unitTest language in order to be able to create unit tests. Add the jetbrains.mps.lang.test language in order to create language (node) tests.

T4.png

Additionally, you need to make sure that the solution containing your test model has a kind set. Typically, choose Other if you do not need either of the other two options (Core plugin or Editor plugin).

T8.png

Right-clicking on the model allows you to create new unit or language tests. See all the root concepts that are available:

T5.png
T6.png

Unit testing with BTestCase

A BTestCase (BaseLanguage Test Case) represents a unit test written in BaseLanguage. Those familiar with JUnit will quickly feel at home.

T7.png

A BTestCase has four sections - one to specify test members (fields), which are reused by test methods, one to specify initialization code, one for clean-up code, and finally a section for the actual test methods. The language also provides a couple of handy assertion statements, which code completion reveals.
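
For readers coming from JUnit, a rough equivalent of the four sections in ordinary Java might look like the sketch below. This is only a conceptual analogy written as a plain JUnit 4 test, not actual BTestCase syntax; the class and its contents are purely illustrative.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class StackTest {
        // test members (fields) reused by the test methods
        private Deque<String> stack;

        @Before
        public void setUp() {                 // initialization code
            stack = new ArrayDeque<>();
        }

        @After
        public void tearDown() {              // clean-up code
            stack.clear();
        }

        @Test
        public void pushedElementIsOnTop() {  // an actual test method
            stack.push("hello");
            assertEquals("hello", stack.peek());
        }
    }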

TestInfo

Some tests, such as Node tests or Editor tests, need to access the MPS project they are part of. MPS can locate the current project by itself in most cases, but an explicit project location can be provided inside a TestInfo node in the root of your test model. This applies to all modes of executing the JUnit tests:

  • Running from the IDE, both in-process and out-of-process: it is assumed the project to open is the one currently open.

  • Running from the <launchtests> task: the project path can be specified as an additional option "project path" of the task. If left unspecified, the ${basedir} is used, which corresponds to the home directory of the current project.

  • For a special case where neither of the above can be used, there is a possibility to specify the project location via a system property: -Dmps.test.project.path.

testInfox1.png

When providing a custom TestInfo node, the Project path attribute in particular is worth your attention. This is where you need to provide a path to the project root, either as an absolute or relative path, or as a reference to a Path Variable defined in MPS (Project Settings -> Path Variables).

pathVariables1.png

To make the path variable available in Ant scripts, define it in your build file with the mps.macro. prefix (refer to the example below).

Testing aspects of language definitions

Node tests

A NodesTestCase contains three sections:

T10.png

The first one contains code that should be verified. The section for test methods may contain baseLanguage code that further investigates nodes specified in the first section. The utility methods section may hold reusable baseLanguage code, typically invoked from the test methods.

Checking for correctness

To test that the type system correctly calculates types and that proper errors and warnings are reported, you write a piece of code in your desired language first. Then select the nodes that you'd like to have tested for correctness and choose the Add Node Operations Test Annotation intention.

T11.png
T12.png
T13.png

This will annotate the code with a check attribute, which can then be made more concrete by setting the type of the check:

T14.png
T15.png

The for error messages option ensures that potential error messages inside the checked node get reported as test failures. So, in the given example, we are checking that there are no errors in the whole Script.

The other options allow you to check the following:

  • has error - ensures a particular error is reported on a node

  • has expected type - ensures the annotated node is expected to have the specified type

  • has info - ensures a particular info message is reported on a node

  • has type - ensures the annotated node has the specified type

  • has type in - ensures the annotated node has one of the specified types

  • has typesystem error - ensures a specified typesystem error is reported on the annotated node

  • has warning - ensures a particular warning is reported on a node

Checking for type system and data-flow errors and warnings

If, on the other hand, you want to test that a particular node is correctly reported by MPS as having an error or a warning, use the has error / has warning option.

T16.png
T17.png

This works for both warnings and errors. Multiple warnings and errors can be declared with a single annotation.

T26_1.png

You can even tie the check to the rule that you expect to report the error / warning. Press Alt+Enter with the caret at the node and pick the Specify Rule References option:

T18.png
T19.png

An identifier of the rule has been added. You can navigate by Ctrl+Space  to the definition of the rule.

T29.png

When run, the test will check that the specified rule is really the one that reports the error.

Type-system specific options

The check command offers several options to test the calculated type of a node.

T100.png
T101.png

Multiple expectations can be combined conveniently:

T37.png

Testing scopes

The Scope Test Annotation allows the test to verify that the scoping rules bring the correct items into the applicable scope:

T102.png

The Inspector panel holds the list of expected items that must appear in the completion menu and that are valid targets for the annotated cell:

T103.png
T104.png

Test and utility methods

The test methods may refer to nodes in your tests through labels. You assign labels to nodes using intentions:

T32.png

The labels then become available in the test methods.

T33.png

Additional options defined in the Inspector

The Inspector tool window provides additional options:

  • Can execute-in-process - allows the author of the test to prevent the test from being executed when the whole test suite is run in-process. The test suite then terminates with a TestSetNotToBeExecutedInProcessException. This may be useful when the test, for example, could accidentally damage the existing project by touching other nodes.

  • Model access mode - specifies the type of access control applied to the test:

    • command - the test will run wrapped in a command, so read/write access to models is granted

    • none - no access control applied

    • read - the test will run wrapped in a read action, so reading models is allowed

    • unset - a maintenance value needed for migration that should not be used


Editor tests

Editor tests allow you to test the dynamism of the editor - actions, intentions and substitutions.

T20.png

An empty editor test case needs a name, an optional description, the code as it should look before the editor transformation (setup), the code after the transformation (result), and finally the actual trigger that transforms the code (the code section).

T21.png

For example, a test verifying that an IfStatement in BaseLanguage can have an "else" block added when the user types "else{" after the closing brace would look as follows:

T21a.png

In the code section the jetbrains.mps.lang.test language gives you several options to invoke user-initiated actions:

  • type - insert some text at the current caret position

  • press keys - simulate pressing a key combination

  • invoke action - trigger an action

  • invoke intention - trigger an intention

  • invoke quick-fix XXX from YYY to the selected node - applies the quick-fix named XXX, offered for the error YYY in the intentions menu, to the selected node; fails if the quick-fix is not available in the menu

  • invoke quick-fix <the only one available> to the selected node - applies the only quick-fix available in the intentions menu; fails if no quick-fix or more than one quick-fix is available

T22.png

Obviously you can combine the special test commands with the plain BaseLanguage code.

To mark the position of the caret in the code, use the appropriate intention with the caret placed at the desired position:

T23.png

The caret position can be specified in both the before and the after code:

T25.png

The cell editor annotation has extra properties to fine-tune the position of the caret in the annotated editor cell. These can be set in the Inspector panel.

Inspecting the editor state

Some editor tests may wish to inspect the state of the editor more thoroughly. The editor component expression gives you access to the editor component under the caret. You can inspect its state as well as modify it, like in these samples:

ec1.png
ec2.png
ec3.png
ec4.png

The is intention applicable expression lets you test whether a particular intention can be invoked in the given editor context:

iiap.png

You can also get hold of the model and project using the model and project expressions, respectively.

Testing two-phase deletion

Two-phase deletion of nodes can be tested using the editor component expression as follows:

EditorTestUtil.runWithTwoStepDeletion({ =>
  invoke action -> Delete
  assert true DeletionApproverUtil.isApprovedForDeletion(editor component.getEditorContext(), editor component.getSelectedNode());
  assert true editor component.getDeletionApprover().isApprovedForDeletion(editor component.findNodeCell(editor component.getSelectedNode()));
  invoke action -> Delete
}, true);

  • EditorTestUtil.runWithTwoStepDeletion will create a local context with two-step deletion enabled. Remember that the user can turn two-phase deletion on and off at will, so this ensures a consistent environment for the tests. The second, boolean parameter to the method indicates whether two-phase deletion should be on or off.

  • DeletionApproverUtil.isApprovedForDeletion - retrieves the cell corresponding to the current node and tests its "approvedForDeletion" flag.

  • Alternatively, use editor component.getDeletionApprover() to test the flag without the help of the utility class. You will have to find and provide the editor cell that should have the "approved for deletion" flag tested.

Migration tests

Migration tests can be used to check that migration scripts produce the expected results from a specified input.

new-migration-test.png

To create a migration test case you should specify its name and the migration scripts to test. In many cases it should be enough to test individual migration scripts separately, but you can safely specify more than one migration script in a single test case, if you need to test how migrations interact with one another.

empty-migration-test.png

Additionally, migration test cases contain the nodes to be passed into the migration process as well as the nodes that are expected to come out as the output of the migration.

input-output.png

When running, migration tests behave the following way:

  1. The input nodes are copied as roots into an empty module with a single model.

  2. The migration scripts are run on that module.

  3. The roots contained in that module after migration are compared with the expected output.

  4. The check() method of the migration(s) concerned is invoked to ensure that it returns an empty list of problems.

To simplify the process of writing migration tests, the expected output can be generated automatically from the input nodes using the currently deployed migration scripts. To do this, use the intention called 'Generate Output from Input'.

generate-output.png

Generator tests

Generators can be tested with generator tests. Their goal is to ensure that a generator, or a set of generators, performs its transformations as expected. Both in-process and out-of-process execution modes are supported from the IDE, as well as execution from MPS Ant build scripts. As with all tests in MPS, the user specifies:

  1. the pre-conditions, in the form of input models

  2. the expected output of the generator, in the form of output models

  3. the set of generators to apply to the input models, in the form of an explicit generator plan; if omitted, the implicit generator plan is used

The jetbrains.mps.lang.test.generator language allows you to create GeneratorTests. The jetbrains.mps.lang.modelapi language lets you create convenient model references using the model/name of the model/ syntax.

GT1.png

Notice that the structure of a generator test gives you a section called Arguments, where all the models need to be specified (input, expected output and optionally also models holding the generation plans), and a section called Assertions, where the desired transformations and matching are specified.

A failure to match the generator output with the expected output is presented to the user in the test report:

GT2.png

Model Match Options

The way the expected output is compared with the actual result of generation tests can be configured with Match Options.

MatchOptions.png

These match options can then be associated with assertions.

Reorder root nodes

Since the Project tool window always lists root nodes ordered alphabetically, it hides the actual physical order of root nodes in the model. By default, the order of root nodes in the models matters when the models are compared during generation tests. To make the tests pass, you can either change the Match options for your test or manually re-order the root nodes in the output models to reflect the expectations you have of your generator.

ReorderNodes1.png


ReorderNodes2.png

Running the tests

Inside MPS

To run tests in a model, just right-click the model in the Project View panel and choose Run tests:

T39.png

If the model contains any of the jetbrains.mps.lang.test tests, a new instance of MPS is silently started in the background (that's why it takes quite some time to run these compared to plain baseLanguage unit tests) and the tests are executed in that new MPS instance. A new run configuration is created, which you can then re-use or customize:

testrunconfig.png

The Run configurations dialog gives you options to tune the performance of tests.

  • Override the default settings location - specify the directory to save the caches in. By default, MPS chooses the temp directory. The directory is cleared on every run.

  • Execute in the same process - to speed up testing, tests can be run in a so-called in-process mode. It was designed specifically for tests that need to have an MPS instance running. (For example, for language type-system tests, MPS should safely be able to check the types of nodes on the fly.)

    One way to run tests is to have a new MPS instance started in the background and run the tests in this instance. The second way, enabled by this checkbox, runs all tests in the same original MPS process, so no new instance needs to be created.

    When the option Execute in the same process is set, the test is executed in the current MPS environment. This is the default for:

    • Node tests

    • Editor tests

    To run tests in the original way (in a separate process) you should uncheck this option. This is the default for:

    • Migration tests

    • Generation tests

    • Unit tests (BTestCases and JUnit test cases)

    The test report is shown in the Run panel at the bottom of the screen:

  • The JUnit run configuration accepts plugins to deploy before running the tests. The user can provide a list of IDEA plugins to be deployed during the test execution. The 'Assemble Plugins' before-launch task is available in the JUnit run configuration as well. It automatically builds the given plugins and copies the artifacts to the settings directory.

T41.png

From a build script

In order to have your generated build script offer the test target that you could use to run the tests using Ant, you need to import the jetbrains.mps.build.mps and jetbrains.mps.build.mps.tests languages into your build script, declare using the module-tests plugin and specify a test modules configuration.

testScript1.png

By default, JUnit tests in MPS generate the test reports in the "vintage" and "jupiter" formats. In addition to that the Open Test report format can be enabled explicitly with the create open test report option. If the option is set to true, report files named "junit-platform-events*-$BUILD_NAME$.xml" are created in the project directory.

To define a macro that Ant will pass to JUnit (e.g. for use in TestInfo roots in your tests), prefix it with mps.macro.:

image2016-10-13-17-29-27.png
Last modified: 22 July 2024