Merged
Changes from all commits (16 commits)
7 changes: 0 additions & 7 deletions .github/dependabot.yml
@@ -64,10 +64,3 @@ updates:
       versions:
       - ">= 4.a"
       - "< 5"
-  - package-ecosystem: docker
-    directory: "/"
-    schedule:
-      interval: weekly
-    open-pull-requests-limit: 10
-    target-branch: dev

3 changes: 1 addition & 2 deletions .github/renovate.json
@@ -13,8 +13,7 @@
     "components/package.json",
     "components/package-lock.json",
     "dojo/components/yarn.lock",
-    "dojo/components/package.json",
-    "Dockerfile**"
+    "dojo/components/package.json"
   ],
   "ignoreDeps": [],
   "packageRules": [{
2 changes: 1 addition & 1 deletion .github/workflows/rest-framework-tests.yml
@@ -53,7 +53,7 @@ jobs:

       # no celery or initializer needed for unit tests
       - name: Unit tests
-        timeout-minutes: 15
+        timeout-minutes: 20
         run: docker compose up --no-deps --exit-code-from uwsgi uwsgi
         env:
           DJANGO_VERSION: ${{ matrix.os }}
@@ -25,9 +25,9 @@ Product Types can have Role\-Based Access Control rules applied, which limit tea

#### What can a Product Type represent?

* If a particular software project has many distinct deployments or versions, it may be worth creating a single Product Type which covers the scope of the entire project, and having each version exist as an individual Product.
* You also might consider using Product Types to represent stages in your software development process: one Product Type for 'In Development', one Product Type for 'In Production', etc.
* Ultimately, it's your decision how you wish to organize your Products, and what you want each Product Type to represent. Your DefectDojo hierarchy may need to change to fit your security teams' needs.

@@ -58,11 +58,11 @@ The following scenarios are good reasons to consider creating a separate DefectD
* "**ExampleProduct 1\.0**" uses completely different software components from "**ExampleProduct 2\.0**", and both versions are actively supported by your company.
* The team assigned to work on "**ExampleProduct version A**" is different than the product team assigned to work on "**ExampleProduct version B**", and needs to have different security permissions assigned as a result.

These variations within a single Product can also be handled at the Engagement level. Note that Engagements don't have access control in the way Products and Product Types do.

## **Engagements**

Once a Product is set up, you can begin creating and scheduling Engagements. Engagements are meant to represent moments in time when testing is taking place, and contain one or more **Tests**.

Engagements always have:

@@ -72,12 +72,12 @@ Engagements always have:
* an assigned **Testing Lead**
* an associated **Product**

There are two types of Engagement: **Interactive** and **CI/CD**.

* An **Interactive Engagement** is typically run by an engineer. Interactive Engagements are focused on testing the application while the app is running, using an automated test, human tester, or any activity “interacting” with the application functionality. See [OWASP's definition of IAST](https://owasp.org/www-project-devsecops-guideline/latest/02c-Interactive-Application-Security-Testing#:~:text=Interactive%20Application%20Security%20Testing,interacting%E2%80%9D%20with%20the%20application%20functionality.).
* A **CI/CD Engagement** is for automated integration with a CI/CD pipeline. CI/CD Engagements are meant to import data as an automated action, triggered by a step in the release process.

Engagements can be tracked using DefectDojo's **Calendar** view.

#### What can an Engagement represent?

@@ -91,7 +91,7 @@ If you have a planned testing effort scheduled, an Engagement offers you a place

* **Test:** Nessus Scan Results (March 12\)
* **Test:** NPM Scan Audit Results (March 12\)
* **Test:** Snyk Scan Results (March 12\)
You can also organize CI/CD Test results within an Engagement. These kinds of Engagements are 'Open\-Ended', meaning that they don't have a date, and will instead add additional data each time the associated CI/CD actions are run.

@@ -137,6 +137,29 @@ The following Test Types appear in the "Scan Type" dropdown when creating a new

Non-parser Test Types should be used when you need to manually create findings that require remediation but don't originate from automated scanner output.

#### **Parser-based Test Types**

Parser-based test types can be categorized by how their test type name is determined:

- **Fixed Test Type Names**: The test type name is predefined and known before import (e.g., "ZAP Scan", "Nessus Scan").

- **Report-Defined Test Type Names**: The test type name is extracted from the scan report content at import time.

Examples include:
- **Generic Findings Import**: Creates test types based on the `type` field in JSON reports
- **SARIF**: Creates test types based on tool names in the SARIF report (e.g., "Dockle Scan (SARIF)")
- **OpenReports**: Creates separate test types per source found in the report

**Report-Defined Test Type Naming Rules:**
- If the report's `type` field equals the scan type → uses scan type directly (e.g., "Generic Findings Import")
- If the report's `type` field differs → creates "{type} Scan ({scan_type})" format (e.g., "Tool1 Scan (Generic Findings Import)")
- If no `type` field is provided → uses scan type directly

**Important Considerations:**
- Report-defined test types are automatically created when a new type is detected during import or reimport.
- For reimports, the test type name must match exactly; mismatches will raise a validation error.
- Deduplication settings (`HASHCODE_FIELDS_PER_SCANNER`) use test type names as keys, so report-defined names must be configured accordingly if you want custom deduplication behavior.
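
Taken together, the naming rules above amount to a small resolver. The sketch below (a hypothetical helper name, not part of DefectDojo's API) mirrors that logic:

```python
def resolve_test_type_name(report_type, scan_type):
    """Mirror the report-defined test type naming rules described above."""
    # No `type` field, or `type` equal to the scan type: use the scan type.
    if not report_type or report_type == scan_type:
        return scan_type
    # Otherwise build "{type} Scan ({scan_type})".
    name = f"{report_type} Scan"
    if name != scan_type:
        name = f"{name} ({scan_type})"
    return name

print(resolve_test_type_name("Tool1", "Generic Findings Import"))
# Tool1 Scan (Generic Findings Import)
```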

#### **How do Tests interact with each other?**

Tests take your testing data and group it into Findings. Generally, security teams will be running the same testing effort repeatedly, and Tests in DefectDojo allow you to handle this process in an elegant way.
13 changes: 12 additions & 1 deletion dojo/finding/deduplication.py
@@ -196,7 +196,18 @@ def is_deduplication_on_engagement_mismatch(new_finding, to_duplicate_finding):


 def get_endpoints_as_url(finding):
-    return [hyperlink.parse(str(e)) for e in finding.endpoints.all()]
+    # Fix for https://github.com/DefectDojo/django-DefectDojo/issues/10215
+    # When endpoints lack a protocol (scheme), str(e) returns a string like "10.20.197.218:6379"
+    # without the "//" prefix. hyperlink.parse() then misinterprets the hostname as the scheme.
+    # We replicate the behavior from dojo/endpoint/utils.py line 265: prepend "//" if "://" is missing
+    # to ensure hyperlink.parse() correctly identifies host, port, and path components.
+    urls = []
+    for e in finding.endpoints.all():
+        endpoint_str = str(e)
+        if "://" not in endpoint_str:
+            endpoint_str = "//" + endpoint_str
+        urls.append(hyperlink.parse(endpoint_str))
+    return urls


def are_urls_equal(url1, url2, fields):
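The `//` prefix trick in the hunk above can be illustrated with the standard library's `urlsplit` (used here only to keep the sketch dependency-free; the production code uses `hyperlink.parse`, whose behavior for scheme-less strings is analogous):

```python
from urllib.parse import urlsplit

# Without a scheme or "//", the string is not recognized as host:port.
bare = urlsplit("10.20.197.218:6379")
print(bare.hostname)  # None

# Prepending "//" marks it as a network location, so host and port resolve,
# which is what the fix above does before calling hyperlink.parse().
fixed = urlsplit("//10.20.197.218:6379")
print(fixed.hostname, fixed.port)  # 10.20.197.218 6379
```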
9 changes: 2 additions & 7 deletions dojo/finding/views.py
@@ -957,8 +957,9 @@ def process_jira_form(self, request: HttpRequest, finding: Finding, context: dic
         logger.debug("jform.jira_issue: %s", context["jform"].cleaned_data.get("jira_issue"))
         logger.debug(JFORM_PUSH_TO_JIRA_MESSAGE, context["jform"].cleaned_data.get("push_to_jira"))
         # can't use helper as when push_all_jira_issues is True, the checkbox gets disabled and is always false
+        push_to_jira_checkbox = context["jform"].cleaned_data.get("push_to_jira")
         push_all_jira_issues = jira_helper.is_push_all_issues(finding)
-        push_to_jira = push_all_jira_issues or context["jform"].cleaned_data.get("push_to_jira")
+        push_to_jira = push_all_jira_issues or push_to_jira_checkbox or jira_helper.is_keep_in_sync_with_jira(finding)
         logger.debug("push_to_jira: %s", push_to_jira)
         logger.debug("push_all_jira_issues: %s", push_all_jira_issues)
         logger.debug("has_jira_group_issue: %s", finding.has_jira_group_issue)
@@ -985,12 +986,6 @@ def process_jira_form(self, request: HttpRequest, finding: Finding, context: dic
                 jira_helper.finding_link_jira(request, finding, new_jira_issue_key)
                 jira_message = "Linked a JIRA issue successfully."
         # any existing finding should be updated
-        jira_instance = jira_helper.get_jira_instance(finding)
-        push_to_jira = (
-            push_to_jira
-            and not (push_to_jira and finding.finding_group)
-            and (finding.has_jira_issue or (jira_instance and jira_instance.finding_jira_sync))
-        )
         # Determine if a message should be added
         if jira_message:
             messages.add_message(
30 changes: 27 additions & 3 deletions dojo/importers/base_importer.py
@@ -205,10 +205,34 @@ def consolidate_dynamic_tests(self, tests: list[Test]) -> list[Finding]:
         if not self.test:
             # Determine if we should use a custom test type name
             if test_raw.type:
-                test_type_name = f"{tests[0].type} Scan"
-                if test_type_name != self.scan_type:
-                    test_type_name = f"{test_type_name} ({self.scan_type})"
+                # If test_raw.type equals scan_type, use scan_type directly
+                if test_raw.type == self.scan_type:
+                    test_type_name = self.scan_type
+                else:
+                    test_type_name = f"{tests[0].type} Scan"
+                    if test_type_name != self.scan_type:
+                        test_type_name = f"{test_type_name} ({self.scan_type})"
             self.test = self.create_test(test_type_name)
+        else:
+            # During reimport, validate that the test_type matches
+            # Calculate the expected test_type_name from the incoming report
+            expected_test_type_name = self.scan_type
+            if test_raw.type:
+                # If test_raw.type equals scan_type, use scan_type directly
+                if test_raw.type == self.scan_type:
+                    expected_test_type_name = self.scan_type
+                else:
+                    expected_test_type_name = f"{test_raw.type} Scan"
+                    if expected_test_type_name != self.scan_type:
+                        expected_test_type_name = f"{expected_test_type_name} ({self.scan_type})"
+            # Compare with existing test's test_type name
+            if self.test.test_type.name != expected_test_type_name:
+                msg = (
+                    f"Test type mismatch: Test {self.test.id} has test_type '{self.test.test_type.name}', "
+                    f"but the report contains test_type '{expected_test_type_name}'. "
+                    f"Reimport with matching test_type or create a new test."
+                )
+                raise ValidationError(msg)
         # This part change the name of the Test
         # we get it from the data of the parser
         # Update the test and test type with meta from the raw test
8 changes: 5 additions & 3 deletions dojo/importers/default_importer.py
@@ -18,7 +18,7 @@
     Test_Import,
 )
 from dojo.notifications.helper import create_notification
-from dojo.utils import perform_product_grading
+from dojo.utils import get_full_url, perform_product_grading
 from dojo.validators import clean_tags

 logger = logging.getLogger(__name__)
@@ -370,11 +370,13 @@ def close_old_findings(
         old_findings = old_findings.filter(Q(service__isnull=True) | Q(service__exact=""))
         # Update the status of the findings and any endpoints
         for old_finding in old_findings:
+            url = str(get_full_url(reverse("view_test", args=(self.test.id,))))
+            test_title = str(self.test.title)
             self.mitigate_finding(
                 old_finding,
                 (
-                    "This finding has been automatically closed "
-                    "as it is not present anymore in recent scans."
+                    'This Finding has been automatically closed by the Test: \n "' + test_title + '"\n' + url +
+                    "\n\nThis is because this Finding is not present anymore in recent scans."
                 ),
                 finding_groups_enabled=self.findings_groups_enabled,
                 product_grading_option=False,
7 changes: 6 additions & 1 deletion dojo/models.py
@@ -799,7 +799,12 @@ def delete(self, *args, **kwargs):
     def copy(self):
         copy = copy_model_util(self)
         # Add unique modifier to file name
-        copy.title = f"{self.title} - clone-{str(uuid4())[:8]}"
+        # Truncate title to ensure it doesn't exceed max_length (100) when appending suffix
+        # Suffix " - clone-{8 chars}" is 17 characters, so truncate to 83 chars
+        clone_suffix = f" - clone-{str(uuid4())[:8]}"
+        max_title_length = 100 - len(clone_suffix)
+        truncated_title = self.title[:max_title_length] if len(self.title) > max_title_length else self.title
+        copy.title = f"{truncated_title}{clone_suffix}"
         # Create new unique file name
         current_url = self.file.url
         _, current_full_filename = current_url.rsplit("/", 1)
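The truncation arithmetic in the hunk above can be sanity-checked standalone. A minimal sketch of the same logic, assuming the field's max_length of 100 noted in the diff comment (note that plain slicing already handles titles shorter than the limit, so the conditional in the diff is kept there only for explicitness):

```python
from uuid import uuid4

def clone_title(title, max_length=100):
    # " - clone-" (9 chars) plus an 8-char uuid prefix = 17-char suffix.
    suffix = f" - clone-{str(uuid4())[:8]}"
    keep = max_length - len(suffix)
    # Slicing a short string returns it unchanged, so this covers both cases.
    return title[:keep] + suffix

print(len(clone_title("A" * 200)))  # 100
```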
3 changes: 3 additions & 0 deletions dojo/tools/blackduck_binary_analysis/parser.py
@@ -104,6 +104,9 @@ def ingest_findings(self, sorted_findings, test):
                 finding.fix_available = True
             else:
                 finding.fix_available = False
+            # Add vulnerability ID for de-duplication
+            if cve:
+                finding.unsaved_vulnerability_ids = [str(cve)]
             findings[unique_finding_key] = finding

         return list(findings.values())
9 changes: 9 additions & 0 deletions dojo/tools/cyclonedx/xml_parser.py
@@ -194,6 +194,15 @@ def _manage_vulnerability_xml(
             "b:ratings/b:rating/b:severity", namespaces=ns,
         )
         severity = Cyclonedxhelper().fix_severity(severity)
+        # by the schema, only id is mandatory, even the severity and description are
+        # optional
+        if not description:
+            description = "\n".join(
+                [
+                    f"**Id:** {vuln_id}",
+                    f"**Severity:** {severity}",
+                ],
+            )
         references = ""
         for advisory in vulnerability.findall(
             "b:advisories/b:advisory", namespaces=ns,
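The fallback built in the hunk above reduces to a couple of lines; a standalone sketch with illustrative names only:

```python
def fallback_description(vuln_id, severity, description=None):
    # Per the CycloneDX schema only the vulnerability id is mandatory,
    # so synthesize a minimal description when the report omits one.
    if description:
        return description
    return "\n".join([f"**Id:** {vuln_id}", f"**Severity:** {severity}"])

print(fallback_description("CVE-2021-44228", "Critical"))
```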
28 changes: 17 additions & 11 deletions dojo/tools/tenable/csv_format.py
@@ -228,17 +228,23 @@ def get_findings(self, filename: str, test: Test):
                 LOGGER.debug(
                     "more than one CPE for a finding. NOT supported by Nessus CSV parser",
                 )
-            cpe_decoded = CPE(detected_cpe[0])
-            find.component_name = (
-                cpe_decoded.get_product()[0]
-                if len(cpe_decoded.get_product()) > 0
-                else None
-            )
-            find.component_version = (
-                cpe_decoded.get_version()[0]
-                if len(cpe_decoded.get_version()) > 0
-                else None
-            )
+            try:
+                cpe_decoded = CPE(detected_cpe[0])
+                find.component_name = (
+                    cpe_decoded.get_product()[0]
+                    if len(cpe_decoded.get_product()) > 0
+                    else None
+                )
+                find.component_version = (
+                    cpe_decoded.get_version()[0]
+                    if len(cpe_decoded.get_version()) > 0
+                    else None
+                )
+            except Exception as e:
+                LOGGER.debug(
+                    f"Failed to parse CPE '{detected_cpe[0]}': {e}. "
+                    "Skipping component_name and component_version.",
+                )

             find.unsaved_endpoints = []
             find.unsaved_vulnerability_ids = []
6 changes: 2 additions & 4 deletions helm/defectdojo/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v2
 appVersion: "2.54.0-dev"
 description: A Helm chart for Kubernetes to install DefectDojo
 name: defectdojo
-version: 1.9.5-dev
+version: 1.9.6-dev
 icon: https://defectdojo.com/hubfs/DefectDojo_favicon.png
 maintainers:
 - name: madchap
@@ -34,6 +34,4 @@ dependencies:
 # description: Critical bug
 annotations:
   artifacthub.io/prerelease: "true"
-  artifacthub.io/changes: |
-    - kind: changed
-      description: chore(deps)_ update valkey docker tag from 0.10.2 to v0.13.0 (helm/defectdojo/chart.yaml)
+  artifacthub.io/changes: ""
2 changes: 1 addition & 1 deletion helm/defectdojo/README.md
@@ -511,7 +511,7 @@ The HELM schema will be generated for you.

# General information about chart values

-![Version: 1.9.5-dev](https://img.shields.io/badge/Version-1.9.5--dev-informational?style=flat-square) ![AppVersion: 2.54.0-dev](https://img.shields.io/badge/AppVersion-2.54.0--dev-informational?style=flat-square)
+![Version: 1.9.6-dev](https://img.shields.io/badge/Version-1.9.6--dev-informational?style=flat-square) ![AppVersion: 2.54.0-dev](https://img.shields.io/badge/AppVersion-2.54.0--dev-informational?style=flat-square)

A Helm chart for Kubernetes to install DefectDojo

13 changes: 13 additions & 0 deletions unittests/scans/generic/generic_no_type.json
@@ -0,0 +1,13 @@
{
"name": "Test Without Type",
"findings": [
{
"title": "Test Finding Without Type",
"description": "This is a test finding without type field",
"severity": "Medium",
"active": true,
"verified": true
}
]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_1.json
@@ -0,0 +1,14 @@
{
"name": "Test Tool1",
"type": "Tool1",
"findings": [
{
"title": "Test Finding 1",
"description": "This is a test finding for Tool1",
"severity": "High",
"active": true,
"verified": true
}
]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_2.json
@@ -0,0 +1,14 @@
{
"name": "Test Tool2",
"type": "Tool2",
"findings": [
{
"title": "Test Finding 2",
"description": "This is a test finding for Tool2",
"severity": "Medium",
"active": true,
"verified": true
}
]
}

14 changes: 14 additions & 0 deletions unittests/scans/generic/generic_test_type_equals_scan_type.json
@@ -0,0 +1,14 @@
{
"name": "Test With Type Equal To Scan Type",
"type": "Generic Findings Import",
"findings": [
{
"title": "Test Finding With Type Equal To Scan Type",
"description": "This is a test finding with type equal to scan_type",
"severity": "High",
"active": true,
"verified": true
}
]
}
