Note: The detailed procedure to produce test documents from Jira is available on the following page: DM Test Documents Procedure.

The data management test approach is based on the following principles:

  • The tests shall be easy to write and maintain
  • The tests can be executed multiple times, depending on the scheduled milestones and needs
  • The test conditions shall be well identified to i) make the test repeatable and ii) make the behavior and the results as close as possible to operations

The documents involved are:

  • The Test Specification: it contains the list of test cases written for a specific DM component, plus some descriptive parts on the test activity for that component.
  • The Test Report: it contains a descriptive part on the test execution, a summary report of the test case runs, and a detailed report of each test performed. With the introduction of the Jira Test Management plugin, it has been proposed to rename this document to Test Plan and Report.

DM is considering the use of Test Management for Jira (ATM by Adaptavist). The main advantage is an easy way to manage test cases and test executions and to relate them to requirements.

The documentation can be extracted from Jira:

  • The Test Specification will contain the test cases defined for a specific DM component.
  • The Test Report will be divided into two parts:
    • the first part (the plan) contains the planned test activity information (first list of tests to be executed), environments, datasets, and personnel.
    • the second part (the report) contains the actual test run results, their assessment, and additional information derived from the execution of the tests.

Test Specification

The test specification is the document where all test cases defined for a specific component in the Document Tree are baselined. Obsolete test cases will also be included.

The test cases have to be written in JIRA, in the LVV project, under the corresponding folder.

Sections 1 and 2 of the test specification need to be written directly using LaTeX, with changes submitted to the corresponding GitHub repository. Sections 3, 4 and appendix A will be generated from JIRA. Additional appendix sections can be added by the author.

How to write a test case

A test case needs to be formulated in a general way, so that it can be executed multiple times in different conditions, like for example, different software versions or different dataset versions.

To fully characterize a Test Case, three sections need to be completed:

  • the test details in the main tab of the Test Case
  • the steps in the Test Script tab of the Test Case
  • the traceability to the Verification Elements (requirements) in the Traceability tab of the Test Case

Test Script

The Test Script consists of a sequence of steps to be executed one after the other.

Each Step has 3 parts:

  • the Description: describes what the step is supposed to do
  • the Test Data: describes which input data are required for the test step
  • the Expected Result: describes the result expected from the step
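As an illustration, the structure described above can be sketched as a small data model. The field and class names here are illustrative only; they are not the actual ATM schema, and the keys (e.g. "LVV-42") are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    """One step of a Test Script (illustrative model, not the ATM schema)."""
    description: str      # what the step is supposed to do
    test_data: str        # input data required for the step
    expected_result: str  # the result expected from the step

@dataclass
class TestCase:
    """A Test Case with its three sections: details, script, traceability."""
    name: str
    objective: str
    script: List[Step] = field(default_factory=list)
    traceability: List[str] = field(default_factory=list)  # Verification Element keys

# Hypothetical example of a fully characterized test case:
tc = TestCase(
    name="Ingest a raw exposure",
    objective="Verify that a raw exposure can be ingested into the repository",
    script=[Step("Run the ingest command on the input file",
                 "One raw exposure file",
                 "The exposure is registered in the repository")],
    traceability=["LVV-42"],  # hypothetical Verification Element key
)
print(len(tc.script), tc.traceability[0])
```

Keeping the three sections explicit in this way makes it easy to check that a test case is complete before setting its status to "Defined".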

These are the rules for writing steps in the Test Script tab:

  • Write each step description in the most reproducible way
    • Ideally, it should be possible to copy and paste the commands. In the case of graphic interfaces, each action shall be clearly described.
    • Avoid references to specific software versions or datasets: the test case has to be as general as possible.
  • In the Test Data field, specify the input data to use (if any)
  • Specify the expected result:
    • this field will be included in the test report, but the step description will not. Ensure that the Expected Result is meaningful and self-explanatory.
    • pay attention to the formatting, the document generated may require some adjustments to the text added in this field
  • When a different test case is used in a step:
    • use only test cases that have been created for that purpose in the same folder (test spec). Test cases meant to be included in other test cases do not need to be traced to parent requirements.
    • avoid recursive use of test cases: only one level of inclusion is permitted.

At the moment (August 2018) plain text test scripts will not be included in the test specification.

Input data and parameters are also not taken into account when generating the test specification or the test report. Future versions of the document generation script may be able to handle this information.


Traceability

In the Traceability tab, one or more Verification Elements need to be linked.

Verification Elements are defined in the model in MagicDraw and synchronized to Jira using the Syndeia plugin. Each Verification Element is related to a requirement in the model.

Verification Elements will be included in the test specification for each test case.

Links to confluence pages or other web pages can be added to this section.

Test Case Details

The following table gives recommendations for each test case field in the main tab:

Field Name | Old Test Spec Name | How to fill it | Additional Comments
Name | Test case subsection title | Has to be a short string indicating the purpose of the test.

It is recommended to avoid using the requirement name and ID. LDM-639 test cases have been named following the Verification Elements, and therefore the requirement names, mainly due to lack of time.

Note that the Test Design is not used anymore, so there is no need for an identifier following the previous naming scheme at the beginning of the name. Old test cases will keep it only for backward compatibility.

Objective | Test Items

Describe the scope of the test.

The requirement to be implemented can be used as a reference; describe what you are going to test. It may be only a part of the requirement.

In some cases, milestones defined in the test plan LDM-503 may drive the scope of the test.

This field will appear in the Test Specification as "Test Items". Avoid including the requirements text.
Precondition | Input Specification | This field should include all inputs required to run the test.

This field will appear in the Test Specification as "Input Specifications".

For example, the input data required (do not specify versions).

The field is provided by the test management plugin in JIRA and can't be renamed.


A Test Specification document will be produced for each folder, including all test cases defined in it.

When you create a new test case the status is set to "Draft".

When the test case is ready to be submitted for approval, set the status to "Defined". The test cases will not be approved one by one but will be submitted to the CCB for approval (via RFC) within a new version of the test specification.

Once the Test Specification is approved the status of the "Defined" test cases will be set to "Approved" and the new issue of the document uploaded in Docushare.

Once a test case is not valid anymore, its status has to be set to "Deprecated"

When you are going to modify approved test cases:

  • create a new version
  • set status to "Draft"

Do not delete test cases, this may remove important information from the system.

The Jira Test Management plugin is not going to enforce a new version when an approved test case is modified.

Set the priority that you think is most appropriate. By default it is set to "Normal".

In the future, this information can be used to prioritize test cases.

At the moment it is not used.

DM (Data Management) | This field may be useful when filtering across test cases.
Who is in charge of writing and maintaining the test case. | In the future, the test case may be executed by a different person, but the owner will not change.
Estimated Time
Fill it in if you have an idea of how much time it may take to run the test case; otherwise, leave it blank.
Verification Type

It can be "Test", "Inspection", "Demonstration" or "Analysis".

Use the most appropriate type. In general, this should be a "Test".

This information should be derived from the requirement and Verification Element definition.
Verification Configuration

Not required.

It can be used if a test case must test a specific configuration of a software product, but this goes against the principle that a Test Case should be generally formulated. Specific configurations will be specified during the test run definition.
Predecessor | Intercase Dependencies | This is the list of test cases that need to be completed (successfully) before the actual test case can be executed. | This does not imply that those test cases are part of the test script. Usually, they are not.
Critical Event

This field is mandatory but should not be relevant for DM. Therefore set it to "False".

Associated Risks
This field should not be relevant to DM. Leave it blank.
Unit under test
This field should not be relevant to DM. Leave it blank. | In the future, we may use the components defined in the model in MagicDraw to group test cases, instead of using the folder.
Required Software | Environment Needs - Software | List here the required software packages that need to be installed in the system to run the test. | If the test case is meant to verify the functionality of a specific DM software package, for example science_pipeline, that package shall NOT be listed here, but in the Objective field, Test Items section.
Test Equipment | Environment Needs - Hardware

List here the required hardware that needs to be installed in the system to run the test.

This usually implies a server with a specific CPU power, RAM and available disk space.

The exact hardware used for the test will be specified in the Test Plan or the Test Run. This information can be different each time a test case is executed.
Test Personnel
This field has not been used until now by DM during the definition of the test. Leave it blank, or list here external people that may need to be involved during the verification. | These people are neither the owner nor the test engineer who is going to run the test. They may be, for example, stakeholders who need to assess the scientific results to ensure that the test has passed.
Safety Hazards
This field should not be relevant to DM. Leave it blank.
Required PPE
This field should not be relevant to DM. Leave it blank.
Postconditions | Output Specifications | Specify here what output is expected from the test.

For example, the output data expected.

This field has been called "Postcondition" due to the duality with the "Precondition" field provided by default.

Test Plan and Report

With the introduction of the Jira Test Management approach, two specific objects are foreseen to guide the test activities planned in a specific moment in time (test campaign):

  • Test Plan: gives information on the planning of the test campaign, following an agreed schedule. The main source for the schedule for DM is LDM-503, but additional test campaigns can be organized ad hoc.
  • Test Run: provides the information on which test cases have to be executed in a specific context. Multiple Test Runs can be associated with a Test Plan.

Before a test activity can start:

  • all relevant information in the Test Plan and Test Report has to be provided in Jira;
  • a first Test Plan and Report (draft) has to be generated from Jira, corresponding source LaTeX files uploaded in GitHub and published via
  • the corresponding responsible (T/CAM, product scientist) has to review the draft document and set the Test Plan status to "Accepted" if everything is fine, or ask the Test Plan and Test Report owners for improvements if needed.

The entire document is generated from Jira, except for the history table and curator information. Appendix sections can be added by the author.

Defining the Test Campaign

To start defining a test campaign, a corresponding Test Plan (LVV-PXXX) has to be created in Jira. Test Plans are organized in folders in the same way as Test Cases are.

Test Plan Details

In the Test Plan Details, the main tab of the Test Plan object in Jira, the following information needs to be provided before the test campaign starts.

Field Name | Identification in the old Test Report template | How to fill it | Additional Comments
Short identification of the test activity | This field will be used as the title of the "Test Plan and Report" document
Objective | Objective / Scope | Describe the Objective and Scope of the test campaign

The first part of the field will be included in the Test Report / Objective subsection.

Add a bold "Scope" string followed by the corresponding text. If no scope is provided, the Test Report / Scope subsection will be left empty.


This information is not used to generate documents, but it helps to navigate and find information:
  • Draft: when a test campaign is proposed or under definition
  • Approved: when a test campaign is fully defined, including test runs information, and it has been formally approved via RFC
  • Completed: when all the test runs associated with the test plan have been completed and all information is ready to be exported in a document
  • Deprecated: when an originally planned test campaign is removed from the planning

The person responsible for the test campaign planning.
Verification Environment | Test Configuration - Hardware | Describe the environment where the test is going to be executed, including hardware and low-level software such as the operating system.
Entry Criteria
Not relevant for DM
Exit Criteria
Not relevant for DM
PMCS Activity
Not relevant for DM
Observing Required
Not relevant for DM, except in case of integration test with Camera
System Overview | System Overview | Description of the system under test focused on the actual test campaign (System Overview) and Applicable Documents for this specific test campaign.

The first part of the field will be included in the Test Report / System Overview.

Add a bold "Applicable Documents" string followed by the corresponding text. If no Applicable Documents text is provided, the Test Report / Applicable Documents subsection will be left empty.

Verification Artifact
From SE Test Management Architecture, this field should contain web links to resulting data product(s), including the Test Reports, Analysis, etc. | To be completed when the Test Plan and Report document is available.
Overall Assessment | Overall Assessment

Provide first a statement saying whether the test campaign was successful or not.

Then add a short text justifying the above statement.

To be completed after all test cases have been executed.
Recommended Improvements | Recommended Improvements

Provide improvements and suggestions as an outcome of the test campaign.

To be completed after all test cases have been executed.

Test Plan Traceability

The Traceability tab in the Test Plan shall link to at least one specific Test Run.

Additional information has to be provided here, if relevant:

  • issues related to the test campaign
  • confluence page (usually not relevant for DM)
  • web-links (usually not relevant for DM)

Test Cycle Information required for the test campaign definition

This Jira ATM object was called Test Run in the previous Jira version. All references to Test Run on this page shall be considered references to Test Cycles.

To complete the definition of the test campaign, some information needs to be provided in the Test Run(s) to be executed. 

Each Test Run can be related only to one Test Plan.

Each Test Case can be included only once in a Test Run (limitation of the Test Management plugin). If a Test Case has to be executed in two different conditions during a test campaign, two Test Runs need to be defined and associated with the same Test Plan.
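The constraint above can be sketched in code: given a list of planned executions (test case, condition), group them into the minimum number of Test Runs so that no test case appears twice in the same run. This is a minimal illustration, not part of the plugin; the test case keys are hypothetical.

```python
def split_into_runs(executions):
    """Group (test_case, condition) pairs into Test Runs such that each
    test case appears at most once per run (the ATM plugin limitation)."""
    runs = []  # each run is a dict {test_case: condition}
    for case, condition in executions:
        for run in runs:
            if case not in run:
                run[case] = condition
                break
        else:
            # every existing run already contains this test case: open a new one
            runs.append({case: condition})
    return runs

# A test case executed under two conditions forces two Test Runs:
planned = [("LVV-T1", "dataset A"), ("LVV-T2", "dataset A"), ("LVV-T1", "dataset B")]
runs = split_into_runs(planned)
print(len(runs))  # 2
```

Both resulting runs would then be associated with the same Test Plan.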

The following fields have to be filled in each Test Run associated with the Test Plan before the Test Plan is approved:

Field Name | Identification in the old Test Report template | How to fill it | Additional Comments
Short identification of the Test Run | In the case of a single Test Run associated with a test campaign, this can be the same as the Test Plan name. In the case of multiple Test Runs, each name has to identify the conditions it describes, such as the test environment, datasets, configuration, etc.
Short description of the Test Run | If applicable, clarify what differentiates the Test Runs associated with the same Test Plan.

This information is not used to generate documents, but it helps to navigate and find information:
  • Not Executed: when the Test Run is under definition or has not yet been executed
  • In Progress: when the Test Run is being executed
  • Done: when all steps of all Test Cases have been executed

Not relevant for DM
Not Relevant for DM
The person responsible for the test run.
Planned Start Date
The date when the test execution should startLeave it blank if this information is not available
Planned End Date
The date when the test execution should endLeave it blank if this information is not available
Software Version / Baseline | Test Configuration - Software | List the software required to start a test activity. | Note that, if the objective of the Test Plan is to test a specific version of a software product, that version must NOT be listed in this custom field.
Configuration | Test Configuration

Configuration to be used.

Input Data

The "Test Configuration" section in the Test Plan and Report will be filled using the content of this field for the subsections regarding the Input Data and the Configuration to be used.

The other subsections, Hardware, Software, and Personnel, will be derived from other custom fields already described above, or from information available in different objects.

Executing the Test Campaign

Once the Test Plan has been approved by the corresponding responsible as described above, the test campaign can start.

For each Test Run, the tester has to run each Test Case Step and add in the corresponding field the obtained result (Actual Result).

Keep in mind that the content of the field "Actual Result" will be included in the test report. Please ensure that its phrasing is consistent, self-explanatory, and properly formatted.

During the execution of a test case, possible Step statuses are:

  • IN PROGRESS: the step is being executed
  • PASS: the step has been executed successfully, as is evident from the "Actual Result" field
  • CONDITIONAL PASS: the step execution has been completed, but some minor problems have been experienced. A corresponding issue describing the problem shall be opened. That issue shall have neither priority "Blocker" nor "Critical"
  • FAIL: the execution revealed a problem. A corresponding issue describing the problem shall be opened. That issue shall have priority "Blocker" or "Critical"
  • BLOCKED: the step cannot be completed due to an existing problem with the preconditions. An issue describing that problem should be available.
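The relation between a step status and the priority of its linked issue can be sketched as a small consistency check. The status and priority names are taken from the list above; the function itself is illustrative, not part of the plugin.

```python
from enum import Enum

class StepStatus(Enum):
    IN_PROGRESS = "In Progress"
    PASS = "Pass"
    CONDITIONAL_PASS = "Conditional Pass"
    FAIL = "Fail"
    BLOCKED = "Blocked"

def issue_priority_ok(status, issue_priority=None):
    """Check a linked issue's priority against the rules above:
    CONDITIONAL PASS requires an issue that is neither Blocker nor Critical;
    FAIL requires an issue that is Blocker or Critical."""
    if status is StepStatus.CONDITIONAL_PASS:
        return issue_priority is not None and issue_priority not in ("Blocker", "Critical")
    if status is StepStatus.FAIL:
        return issue_priority in ("Blocker", "Critical")
    return True  # other statuses impose no priority constraint

print(issue_priority_ok(StepStatus.FAIL, "Blocker"))             # True
print(issue_priority_ok(StepStatus.CONDITIONAL_PASS, "Minor"))   # True
print(issue_priority_ok(StepStatus.CONDITIONAL_PASS, "Blocker")) # False
```

A check of this kind could be run before closing a Test Run, to catch steps whose linked issues contradict their recorded status.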

You can use the time recording function to trace the execution time of each test case.

Issues shall be linked in the corresponding section in the test player and mentioned in the "Actual Result" field.

Once all steps in a test case have been completed, a comment shall be added, summarizing the test result. This comment will be reported in the "Test Plan and Report" summary table and will be the most visible information for each test case executed in the test campaign.

Once all Test Runs associated with a Test Plan have been completed, the following activities will be performed to close the test campaign:

  • complete in the test plan the field "Overall Assessment" as follows:
    • Add a first statement saying if the test campaign has been successfully concluded or not.
    • Add a short text (max 20 lines) explaining the reasons for the above statement.
  • add suggestions in the "Recommended Improvements" field, if any, or leave it blank
  • generate from Jira the second part of the "Test Plan and Report" document (sections 4 and 5), 
  • the corresponding manager reviews the document, ensuring that all information in it is meaningful and the text correctly formatted
  • upload the document in Docushare
  • Set the Test Plan status to "Completed".

Text Formatting

(to be confirmed)

The Jira Test Management plugin currently provides only limited text formatting capabilities. In addition, we need to fit the information provided in Jira into a readable document.

The following rules need to be followed to produce well-formatted and readable test documentation automatically from Jira.

If the formatting is broken, the text needs to be fixed in Jira and NEVER in the LaTeX document in GitHub.

  1. General multiline fields will be converted into subsections in the corresponding generated document.
    1. The name of the field will become the name of the subsection.
      1. An exception is made for a few fields, where a different name is expected to appear in the document, as explained in the table above.
    2. The formatting will be preserved as much as possible in the generated document.
    3. If more structured text needs to be provided in a field, each bold text will identify a subsection.
    4. Tables or images will not be exported (for the moment).
  2. Step multiline fields will be displayed in a LaTeX minipage environment and may not exceed a single page. Please ensure that the text included in those fields is not too long.
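Rule 1.3 above (each bold text identifies a subsection) can be illustrated with a short sketch. This is not the actual docsteady implementation; it assumes Jira wiki markup, where bold is written as *Title* on its own line.

```python
import re

def split_subsections(field_text):
    """Split a multiline field into (title, body) pairs, one per bold marker,
    mimicking rule 1.3 above. Text before the first bold marker gets no title.
    Assumes Jira wiki markup, where bold is written as *Title*."""
    parts = re.split(r"^\*([^*\n]+)\*\s*$", field_text, flags=re.MULTILINE)
    sections = []
    if parts[0].strip():
        sections.append((None, parts[0].strip()))
    for i in range(1, len(parts), 2):
        sections.append((parts[i].strip(), parts[i + 1].strip()))
    return sections

field = "Overview of the campaign.\n*Scope*\nOnly the ingest component."
print(split_subsections(field))
# [(None, 'Overview of the campaign.'), ('Scope', 'Only the ingest component.')]
```

This is why, for example, the Objective field asks for a bold "Scope" string: the text after it becomes its own subsection in the generated document.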


Test Specification Approval

  • Create / Update Test cases in Jira
    • test case status shall be set to Draft when an Approved test case is updated
  • Test Specification updated via Travis (continuously or at regular intervals)
  • Submit RFC for Test Spec approval
  • Once RFC approved:
    • Test Cases status to be set to Approved,
    • Test Spec to be regenerated with approved test cases and uploaded in Docushare

Test Plan and Report Approval

  • Create in Jira a Test Plan
  • Complete the Test Plan with all relevant information required before the test activity can start (see procedure above)
  • Create a Test Run and trace it to the Test Plan just created, multiple Test Runs can be created for the same Test Plan.
  • Complete the Test Run(s) with all relevant information required before the test activity can start (see procedure above)
  • The document "Test Plan and Report" is generated (draft) from Jira (can be generated continuously)
  • Formal Jira Test Plan approval (Test Readiness Review) before test activity starts:
    • Target: 
      • Ensure that everything needed for the test is available and has been properly documented
      • Request changes to the Jira Test Plan and Test Run(s) if needed
      • Set status of Jira Test Plan to "Approved"
    • Who is involved
      • In the case of a small test campaign regarding a single product, the product leader is in charge of reviewing the document and setting the Jira Test Plan to "Approved". Stakeholders can be involved, but the objective is to keep the process as smooth as possible.
      • In case of a big integration test campaign that involves many products and may extend outside DM:
        • The draft "Test Plan and Report" should be sent to all responsibles of the involved systems and to stakeholders
        • Feedback is collected by the responsible for the test campaign (the Jira Test Plan owner), and corrections and updates are made in the Jira Test Plan and Test Run(s)
        • If needed, a meeting should be organized to discuss and sort out all open points
        • Once all problems have been sorted out, the Jira Test Plan owner can set the status to "Approved" and the test activity can start.
        • A draft version of the Test Plan and Report can be uploaded in Docushare (not preferred)
  • Run the test using the Jira Test Run Player
    • Report result for each Test Step
    • Issues opened during the test execution shall be related to the Test Case in the Test Run Player
    • Assess the result of global execution of each Test Case as specified above
  • Once all Test Cases included in the Test Run(s) have been executed, complete the field "Overall Assessment" in the Jira Test Plan and set the status to "Completed"
  • The Test Plan and Report generated continuously from Jira will include all execution information and 
  • It should be ready for the final approval and issue in Docushare:
    • The test report should be explicitly approved by whoever requested that the test activity be carried out.

Documents Generation from Jira

The procedure to generate a document from Jira is in principle the same as that used for generating requirements documents from MagicDraw.

  • Open a Jira issue in DM project (who: the person requesting the test)
  • Add content to Jira using test cases, test plans and test cycles (who: the Jira issue assignee)
  • Create a ticket branch in the Github repository of the document to update (who: the Configuration Release Manager)
  • Auto-generate the document from Jira using the tool docsteady (automated; initially done by the Configuration Release Manager)
    • The autogenerated parts of the document will include all changes implemented in the Jira ATM objects (test plan, test cases, etc.). Never edit directly in GitHub the parts of the document that are auto-generated from Jira.
    • See the above Test Specification and Test Plan and Report descriptions for the detailed list of autogenerated sections.
    • Edit the parts of the document that are not generated from Jira, directly in the branch.
  • Once all the edits are done, a pull request is created and the Jira issue status is set to "In Review"
  • Reviewers will comment and request changes using the pull request
  • If comments are accepted, edits need to be done in Jira. If a comment affects a part that is not auto-generated, the changes will be done directly in the ticket branch. Some changes may affect the auto-generation script, docsteady.
  • Once the PR is approved, the ticket branch is merged into main.

The approval of the documents follows the usual RFC process, including the RFC specific branch.



  1. Regarding "who signs off on test reports": part of the question is who is reporting to whom.  If it's the tester(s) reporting to DM Management, that's one thing.  If it's ultimately DM speaking to the PMO and agencies regarding our meeting our requirements and milestones, then a DM Management signoff is appropriate.

    1. One should also delegate down to the correct level - even if this is for NSF, the product owner sign-off would seem the appropriate level.

  2. Responding to the DMLT action of 2018-08-06 to provide comments on the procedure for accepting test reports:

    My immediate question here is: what is the scope of the tests to which the procedure defined above will be applied?  To date, we have only issued test reports for LDM-503 (ie, level 2) milestones. To first order, all of these milestones fall under the heading of “big integration tests”; there are no “small test campaigns”. Given that the text above allows for “small” tests — and because it seems like an obvious extension of our current procedures — I wonder if the plan is ultimately to extend this procedure to verifying L3 & L4 milestones? In any event, it would be helpful to clarify the text above to describe exactly which tests this procedure is supposed to apply to.

    For now, I'll assume that this material applies only to the LDM-503 level 2 milestones. In that case, I propose that test reports should be explicitly accepted by both the DM Project Manager and the Subsystem Scientist or their deputies.


    • There are only O(20) of these tests; averaged over the next several years, this does not impose a significant workload.
    • These tests are our primary mechanism for tracking and demonstrating progress on DM construction. It is incumbent upon Project Management to be both directly responsible for that, and to be in a position to defend it to the wider project, the agencies, the general public, etc when called upon to do so.
    • These tests are our primary mechanism of verifying that we are meeting the requirements placed upon the DM system. Since managing those requirements is the primary responsibility of the Subsystem Scientist, it is incumbent upon them to verify that tests are being executed correctly and that requirements are being properly verified.

    Note that it's never appropriate for a tester to approve their own report: that's akin to checking your own homework, or reviewing your own code.

    In the event that we did adopt a procedure like the above for L3 & L4 milestones, then it would be appropriate for the product owner of that milestone to act as the approver of the report, unless they were also responsible for carrying out the test being reported upon.

    1. Yes, it was the Level 2 milestones I was thinking about, and I agree that there can be a delegated procedure for lower-level ones.

    2. Unknown User (gcomoretto)

      The proposed procedure is designed to avoid extra formalities, in a context that can become very frenetic in the future (from my Gaia experience). Executing a test procedure on the fly (for whatever reason) should be possible and should not require a lot of formality.

      In any case, the procedure has to be general.

      We can't specify in the procedure which type of approval has to be applied to each milestone, since the milestones can change, and the procedure can be applied to test activities that are not defined in these milestones. In that case, we should define in the milestone which approval procedure has to be used.

      My understanding was that we have an outstanding test report (DMTR-091) that needs to be approved. And this does not look like a "big integration test"; only 3 test cases are documented there. But I may be wrong.

      On the one hand, having the test reports always approved via RFC simplifies the approach and makes it easy to trace back what happened in a specific test campaign.

      However, it may be useful to have a shortcut that permits approving a test report much more quickly. This should be identified case by case, at the moment you define the test campaign or test milestone in LDM-503.

      1. I think the above just emphasizes my earlier point: we need to be clear about which activities this procedure applies to.

        My contention is that all level 2 milestone test reports must be approved by the DMPM & DMSS. Other, lesser, tests may be delegated to others, or may not need test reports, or whatever.

        Since there are only O(20) L2 milestones, I don't think this can become “very frenetic” (at least, not for more than short periods at a time). If it is frenetic, that's even more reason for the DMPM and DMSS to be explicitly involved: it'll be harder, but even more essential, for them to be able to answer in detail for the state of the project during high-stress periods.

        If we execute many, smaller, tests, then I agree that the considerations change. Until that's been properly specified, then I don't think you can expect a meaningful answer to “what's the procedure for accepting test reports?” — the question simply isn't well enough defined.

        (DMTR-91, by the way, documents the major integration event between the Data Facility and the Alert Distribution system. It's about as much of a “big integration test” as I can imagine...)

        1. Unknown User (gcomoretto)

          OK, each test milestone has to define which approval procedure has to be applied.

          And the procedure has to be generic, not tailored only around L2 milestones.

          1. So your statement is that we're applying this procedure to all of our milestones, not just L2 — is that correct?

            1. Unknown User (gcomoretto)


              The action text says: 

                  Define the process for approving test reports.

              1. (I wrote a bunch of text, which I leave below for reference, but let me preface it with the TL;DR which gets right to the point):

                Before we can define the process for approving test reports, you need to define the circumstances under which test reports will be issued.

                Original text:

                So I've been thinking more about this, and I think we need some expectation level setting.

                To date, we have issued test reports for:

                  • The completion of Level 2 milestones;
                  • Characterization reports for Science Pipelines releases;
                  • Ad-hoc documents reporting on something. Often these were historical — i.e., they predate the concept of the test report as introduced by Wil (e.g. DMTR-22, DMTR-12, etc).

                Of these three categories, test reports on the completion of L2 milestones obviously map onto the verification approach being described on this page, so that's what I've focused my comments on to date.

                The other two categories have not been tied to requirements verification in any formal way: they clearly report on things that are relevant to the overall operation of the DM system, but that's as far as it goes.

                So my contention is that the contents of a test report is not well defined, and certainly is not (in general) coupled to any of the Jira-based operations described above. As such, I believe that the question as posed — “define the process for approving test reports” — is ill-defined and effectively unanswerable.

                I suggest that limiting the scope of this question to “define the process for accepting test reports describing the completion of L2 milestones” is more meaningful.

                1. Unknown User (gcomoretto)

                  I agree with you that we have an issue on the Test Report definition.

                  Clearly, the second category is not a test report, but a report on quality parameters that characterize a release, and it is not related to the verification approach.

                  I would suggest keeping the procedure focused on proper Test Reports, intended to be reports on test activities that involve the execution of documented test cases.

                  This may coincide 99% of the time with L2 milestones, but I would like to leave open the possibility of test activities that are not formal L2 milestones.

                  1. Thanks for clarifying — I'm happy with that question!

                    I suggest the following answer:

                    The test report should be explicitly approved by whoever requested that the test be carried out.

                    In the case of L2 milestones, that request is issued by the DMPM and DMSS by way of LDM-503, so they should approve the resulting report. For other test executions, it will vary on a case-by-case basis.

                    In terms of mechanism, I suggest that an RFC or similar is not necessary: it's fine for the approver to simply indicate their approval with a comment on the Jira ticket which captures the work of organizing and/or carrying out the test.

                    1. So perhaps that ticket just gets assigned for review to the requestor, then.  That works in the existing DM-ticket workflow.

                    2. Unknown User (gcomoretto)

                      OK, I will update the text and fix a few minor inconsistencies.