Product Types can have Role-Based Access Control rules applied, which limit team access.

#### What can a Product Type represent?

* If a particular software project has many distinct deployments or versions, it may be worth creating a single Product Type which covers the scope of the entire project, and having each version exist as individual Products.
* You also might consider using Product Types to represent stages in your software development process: one Product Type for 'In Development', one Product Type for 'In Production', etc.
* Ultimately, it's your decision how you wish to organize your Products and what you want each Product Type to represent. Your DefectDojo hierarchy may need to change to fit your security teams' needs.

The following scenarios are good reasons to consider creating a separate DefectDojo Product:
* "**ExampleProduct 1.0**" uses completely different software components from "**ExampleProduct 2.0**", and both versions are actively supported by your company.
* The team assigned to work on "**ExampleProduct version A**" is different than the product team assigned to work on "**ExampleProduct version B**", and needs to have different security permissions assigned as a result.

These variations within a single Product can also be handled at the Engagement level. Note that Engagements don't have access control in the way Products and Product Types do.

## **Engagements**

Once a Product is set up, you can begin creating and scheduling Engagements. Engagements are meant to represent moments in time when testing is taking place, and contain one or more **Tests**.

Engagements always have:

* an assigned **Testing Lead**
* an associated **Product**

There are two types of Engagement: **Interactive** and **CI/CD**.

* An **Interactive Engagement** is typically run by an engineer. Interactive Engagements are focused on testing the application while the app is running, using an automated test, human tester, or any activity “interacting” with the application functionality. See [OWASP's definition of IAST](https://owasp.org/www-project-devsecops-guideline/latest/02c-Interactive-Application-Security-Testing#:~:text=Interactive%20Application%20Security%20Testing,interacting%E2%80%9D%20with%20the%20application%20functionality.).
* A **CI/CD Engagement** is for automated integration with a CI/CD pipeline. CI/CD Engagements are meant to import data as an automated action, triggered by a step in the release process.

Engagements can be tracked using DefectDojo's **Calendar** view.

#### What can an Engagement represent?

If you have a planned testing effort scheduled, an Engagement offers you a place to organize your Tests. For example:

* **Test:** Nessus Scan Results (March 12)
* **Test:** NPM Scan Audit Results (March 12)
* **Test:** Snyk Scan Results (March 12)

You can also organize CI/CD Test results within an Engagement. These kinds of Engagements are 'Open-Ended', meaning that they don't have a date and will instead add additional data each time the associated CI/CD actions are run.

The following Test Types appear in the "Scan Type" dropdown when creating a new Test:

Non-parser Test Types should be used when you need to manually create findings that require remediation but don't originate from automated scanner output.

#### **Parser-based Test Types**

Parser-based test types can be categorized by how their test type name is determined:

- **Fixed Test Type Names**: The test type name is predefined and known before import (e.g., "ZAP Scan", "Nessus Scan").

- **Report-Defined Test Type Names**: The test type name is extracted from the scan report content at import time.

Examples include:
- **Generic Findings Import**: Creates test types based on the `type` field in JSON reports
- **SARIF**: Creates test types based on tool names in the SARIF report (e.g., "Dockle Scan (SARIF)")
- **OpenReports**: Creates separate test types per source found in the report

**Report-Defined Test Type Naming Rules:**
- If the report's `type` field equals the scan type → uses scan type directly (e.g., "Generic Findings Import")
- If the report's `type` field differs → creates "{type} Scan ({scan_type})" format (e.g., "Tool1 Scan (Generic Findings Import)")
- If no `type` field is provided → uses scan type directly

**Important Considerations:**
- Report-defined test types are created automatically when a new type is detected during import or reimport.
- For reimports, the test type name must match exactly; a mismatch raises a validation error.
- Deduplication settings (`HASHCODE_FIELDS_PER_SCANNER`) use test type names as keys, so report-defined names must be configured under those exact names if you want custom deduplication behavior.
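
As a sketch, a custom deduplication entry for a report-defined "Tool1" type would be keyed by the full generated name (the field list here is purely illustrative, not a recommended configuration):

```python
# Illustrative settings fragment: the dictionary key must match the
# report-defined test type name exactly as DefectDojo generates it.
HASHCODE_FIELDS_PER_SCANNER = {
    "Tool1 Scan (Generic Findings Import)": ["title", "severity"],
}
```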

#### **How do Tests interact with each other?**

Tests take your testing data and group it into Findings. Generally, security teams will be running the same testing effort repeatedly, and Tests in DefectDojo allow you to handle this process in an elegant way.
30 changes: 27 additions & 3 deletions dojo/importers/base_importer.py
def consolidate_dynamic_tests(self, tests: list[Test]) -> list[Finding]:
    if not self.test:
        # Determine if we should use a custom test type name
        if test_raw.type:
            # If test_raw.type equals scan_type, use scan_type directly
            if test_raw.type == self.scan_type:
                test_type_name = self.scan_type
            else:
                test_type_name = f"{tests[0].type} Scan"
                if test_type_name != self.scan_type:
                    test_type_name = f"{test_type_name} ({self.scan_type})"
            self.test = self.create_test(test_type_name)
    else:
        # During reimport, validate that the test_type matches.
        # Calculate the expected test_type_name from the incoming report.
        expected_test_type_name = self.scan_type
        if test_raw.type:
            # If test_raw.type equals scan_type, use scan_type directly
            if test_raw.type == self.scan_type:
                expected_test_type_name = self.scan_type
            else:
                expected_test_type_name = f"{test_raw.type} Scan"
                if expected_test_type_name != self.scan_type:
                    expected_test_type_name = f"{expected_test_type_name} ({self.scan_type})"
        # Compare with the existing test's test_type name
        if self.test.test_type.name != expected_test_type_name:
            msg = (
                f"Test type mismatch: Test {self.test.id} has test_type '{self.test.test_type.name}', "
                f"but the report contains test_type '{expected_test_type_name}'. "
                f"Reimport with a matching test_type or create a new test."
            )
            raise ValidationError(msg)
    # This part changes the name of the Test;
    # we get it from the parser's data.
    # Update the test and test type with meta from the raw test
13 changes: 13 additions & 0 deletions unittests/scans/generic/generic_no_type.json
{
    "name": "Test Without Type",
    "findings": [
        {
            "title": "Test Finding Without Type",
            "description": "This is a test finding without type field",
            "severity": "Medium",
            "active": true,
            "verified": true
        }
    ]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_1.json
{
    "name": "Test Tool1",
    "type": "Tool1",
    "findings": [
        {
            "title": "Test Finding 1",
            "description": "This is a test finding for Tool1",
            "severity": "High",
            "active": true,
            "verified": true
        }
    ]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_2.json
{
    "name": "Test Tool2",
    "type": "Tool2",
    "findings": [
        {
            "title": "Test Finding 2",
            "description": "This is a test finding for Tool2",
            "severity": "Medium",
            "active": true,
            "verified": true
        }
    ]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_equals_scan_type.json
{
    "name": "Test With Type Equal To Scan Type",
    "type": "Generic Findings Import",
    "findings": [
        {
            "title": "Test Finding With Type Equal To Scan Type",
            "description": "This is a test finding with type equal to scan_type",
            "severity": "High",
            "active": true,
            "verified": true
        }
    ]
}
