diff --git a/docs/assets/images/priority-overview.png b/docs/assets/images/priority-overview.png
new file mode 100644
index 00000000000..82ea29a71ae
Binary files /dev/null and b/docs/assets/images/priority-overview.png differ
diff --git a/docs/assets/images/priority_dashboard.png b/docs/assets/images/priority_dashboard.png
new file mode 100644
index 00000000000..864c2441626
Binary files /dev/null and b/docs/assets/images/priority_dashboard.png differ
diff --git a/docs/assets/images/priority_edit_product.png b/docs/assets/images/priority_edit_product.png
new file mode 100644
index 00000000000..c3045a3bb8e
Binary files /dev/null and b/docs/assets/images/priority_edit_product.png differ
diff --git a/docs/assets/images/pro_risk_levels.png b/docs/assets/images/pro_risk_levels.png
index 95c8eaa9d4f..9aed4886446 100644
Binary files a/docs/assets/images/pro_risk_levels.png and b/docs/assets/images/pro_risk_levels.png differ
diff --git a/docs/content/en/working_with_findings/finding_priority.md b/docs/content/en/working_with_findings/finding_priority.md
index 224fd7a71b7..646d4e83765 100644
--- a/docs/content/en/working_with_findings/finding_priority.md
+++ b/docs/content/en/working_with_findings/finding_priority.md
@@ -4,70 +4,92 @@ description: "How DefectDojo ranks your Findings"
weight: 1
---
-Additional Finding filters are available in DefectDojo Pro to more easily triage, filter and prioritize Findings.
+
-
+Effective risk-based vulnerability management requires an approach that considers
+both business context and technical exploitability. Using DefectDojo Pro’s Priority and
+Risk feature, users can automatically rank Findings in a meaningful context, ensuring
+that high-impact vulnerabilities are addressed first.
-* **Priority** sorts Findings based on the context and importance of the Product they are stored in.
-* **Risk** considers the Product's context, with a greater emphasis on the exploitability of a Finding.
+**Priority** is a calculated numerical rank applied to all Findings in your DefectDojo
+instance. It allows you to quickly understand vulnerabilities in context, especially within
+large organizations that oversee security needs across many Findings and/or
+Products.
-Learn more about Priority and Risk with DefectDojo Inc's May 2025 Office Hours:
-
+**Risk** is a 4-level ranking system which factors in a Finding’s exploitability to a
+greater degree than Priority does. This is meant as a less granular, more
+‘executive-level’ version of Priority.
-## Finding Priority
+
+
+Priority and Risk values can be used with other filters to compare Findings in any context, such as:
-In DefectDojo Pro, Priority is a calculated field on Findings that can be used to sort or filter Findings according to Product-level metadata:
+* within a single Product, Engagement or Test
+* globally in all DefectDojo Products
+* between a few specific Products
-- Product's Business Criticality
-- Whether the Product has an External Audience
-- Whether the Product is Internet Accessible
-- The Product's estimated revenue or user records count
+Applying Finding Priority and Risk helps your team respond to the most relevant
+vulnerabilities in your organization, and also provides a framework to assist in
+compliance with regulatory standards.
-DefectDojo Pro's Finding Priority assigns a numerical rank to each Finding according to this metadata, to provide users with a stronger context on triage and remediation.
-
+Learn more about Priority and Risk with DefectDojo Inc's May 2025 Office Hours:
+
-The range of Priority values is from 0 to 1150. The higher the number, the more urgency the Finding is to triage or remediate.
-Priority numbers can be used with other filters to compare Findings in any context, such as:
+## How Priority & Risk are calculated
+The range of Priority values is from 0 to 1150. The higher the number, the more
+urgent the Finding is to triage or remediate.
-* within a single Product, Engagement or Test
-* globally in all DefectDojo Products
-* between a few specific Products
+Similar to Severity, Risk is scored from Low -> Medium -> Needs Action -> Urgent. Because **Risk** considers Priority fields, it may differ from a tool's reported Severity.
-## How Priority is calculated
+
-Every Active finding will have a Priority calculated. Inactive or Duplicate Findings will not.
+## Priority Fields: Product-Level
-Priority is set based on the following factors:
+Each Product in DefectDojo has metadata that tracks business criticality and risk
+factors. This metadata is used to help calculate Priority and Risk for any associated
+Findings.
-#### Product-Level
+All of these metadata fields can be set on the **Edit Product** form for a given Product.
-- The assigned Criticality for the Product (if defined)
-- The estimated User Records for the Product (if defined)
-- The estimated Revenue for the Product (if defined)
-- If the Product has External Audience defined
-- If the Product has Internet Accessible defined.
+
-All of these metadata fields can be set on the Edit Product form for a given Product.
+* **Criticality** can be set to any of None, Very Low, Low, Medium, High, or Very
+High. Criticality is a subjective field, so when assigning it, consider how the
+Product compares to other Products in your organization.
+* **User Records** is a numerical estimation of user records in a database (or a system
+that can access that database).
+* **Revenue** is a numerical estimation of annual revenue for the Product. It is not
+possible to set a currency type in DefectDojo, so make sure that all of your Revenue
+estimations have the same currency denomination. (“50000” could mean $50,000
+US Dollars or ¥50,000 Japanese Yen - the denomination does not matter as long as
+all of your Products have revenue calculated in the same currency).
+* **External Audience** is a true/false value - set this to True if this Product can be
+accessed by an external audience: for example, customers, users, or anyone
+outside of your organization.
+* **Internet Accessible** is a true/false value. If this Product can connect to the open
+internet, you should set this value to True.
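
As a rough sketch, these Product-level fields can also be set through DefectDojo's REST API rather than the **Edit Product** form. The instance URL, product id, and token below are placeholders, and the exact accepted values (for example, the strings allowed for `business_criticality`) should be checked against your own instance's API schema:

```python
def product_priority_payload(criticality: str, user_records: int, revenue: str,
                             external_audience: bool, internet_accessible: bool) -> dict:
    """Build the Product metadata fields that feed Priority and Risk."""
    return {
        "business_criticality": criticality,
        "user_records": user_records,
        "revenue": revenue,
        "external_audience": external_audience,
        "internet_accessible": internet_accessible,
    }

# Hypothetical usage against a placeholder instance (requires the `requests` package):
# import requests
# requests.patch(
#     "https://defectdojo.example.com/api/v2/products/42/",
#     headers={"Authorization": "Token YOUR_API_TOKEN"},
#     json=product_priority_payload("very high", 250000, "5000000", True, True),
#     timeout=30,
# )
```

Updating these fields through the API makes it easier to keep Priority inputs in sync with an external asset inventory or CMDB.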
-#### Finding-Level
+Priority is a ‘relative’ calculation, meant to compare different Products within
+your DefectDojo instance. It is ultimately up to your organization to decide how these
+fields are set. These values should be as accurate as possible, but the primary goal is
+to highlight your key Products so that you can prioritize vulnerabilities according to
+your organization’s policies - the fields do not need to be set perfectly.
-- Whether or not the Finding has an [EPSS score](/en/working_with_findings/intro_to_findings/#monitor-current-vulnerabilities-using-cves-and-epss-scores-pro-feature), this is automatically kept up to date for Pro customers
-- How many Endpoints in the Product are affected by this Finding
-- Whether or not a Finding is Under Review
+## Priority Fields: Finding-Level
-If no relevant metadata at the Finding or Product level is set, the Priority level will follow the Severity for a given Finding.
+Findings within a Product can have additional metadata which can further adjust the Finding’s Priority and Risk level:
-- Critical = 90
-- High = 70
-- Medium = 50
-- Low = 30
-- Info = 10
+* Whether or not the Finding has an EPSS score (EPSS scores are automatically added to Findings and kept up to date for Pro users)
+* How many Endpoints in the Product are affected by this Finding
+* Whether or not a Finding is Under Review
+* Whether the Finding is in the KEV (Known Exploited Vulnerabilities) database, which is checked by DefectDojo on a regular basis
+* The tool-reported Severity of a Finding (Info, Low, Medium, High, Critical)
-Currently, Priority calculation and the underlying formula cannot be adjusted. These numbers are meant as a reference only - your team's actual priority for remediation may vary from the DefectDojo calculation.
+Currently, Priority calculation and the underlying formula cannot be adjusted. These
+numbers are meant as a reference only - your team’s actual priority for remediation
+may vary from the DefectDojo calculation.
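
Since the formula itself is internal and cannot be adjusted, the following is only an illustrative sketch of how Product-level and Finding-level signals could combine into a 0-1150 rank. The weights are hypothetical; only the Severity base values (Info = 10 through Critical = 90) mirror the fallback DefectDojo uses when no other metadata is set:

```python
from dataclasses import dataclass

# Severity fallback values, per the documentation; all other weights below are
# invented for illustration and do NOT reproduce DefectDojo's actual formula.
SEVERITY_BASE = {"Info": 10, "Low": 30, "Medium": 50, "High": 70, "Critical": 90}

@dataclass
class ProductContext:
    criticality: int          # 0 (None) .. 5 (Very High)
    external_audience: bool
    internet_accessible: bool

def priority_sketch(severity: str, epss: float, known_exploited: bool,
                    product: ProductContext) -> int:
    score = SEVERITY_BASE.get(severity, 10)           # falls back to Severity alone
    score += product.criticality * 100                # business context dominates
    score += 150 if product.internet_accessible else 0
    score += 100 if product.external_audience else 0
    score += int(epss * 200)                          # exploitability signal
    score += 110 if known_exploited else 0            # KEV-listed vulnerabilities
    return min(score, 1150)                           # cap at the documented maximum
```

With no Product metadata set, the sketch reduces to the Severity fallback; a KEV-listed, high-EPSS Finding on an internet-accessible, Very High criticality Product approaches the 1150 ceiling.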
-## Finding Risk
+## Finding Risk Calculation

@@ -80,3 +102,36 @@ The four assignable Risk levels are:
A Finding's EPSS / exploitability is much more emphasized in the Risk calculation. As a result, a Finding can have both a high priority and a low risk value.
As with Finding Priority, the Risk calculation cannot currently be adjusted.
+
+## Priority Insights Dashboard
+
+Users can take an executive-level view of Priority and Risk in their environment using
+the Priority Insights Dashboard (Metrics > Priority Insights in the sidebar).
+
+
+
+This dashboard can be filtered to include specific Products or date ranges. As with
+other Pro dashboards, this dashboard can be exported from DefectDojo as a PDF to
+quickly produce a report.
+
+## Setting Priority & Risk for Regulatory Compliance
+
+This is a non-exhaustive list of regulatory standards that specifically require
+vulnerability prioritization methods:
+
+* [SOX (Sarbanes-Oxley Act)](https://www.sarbanes-oxley-act.com/) compliance requires revenue-based prioritization for
+systems impacting financial data. In DefectDojo, a system’s revenue can be entered
+at the Product level.
+* [PCI DSS](https://www.pcisecuritystandards.org/standards/pci-dss/) compliance requires prioritization based on risk ratings and criticality to
+cardholder data environments. Business Criticality and External Audience can be
+set at the Product level, while DefectDojo’s Finding-level EPSS sync supports PCI’s
+risk-based approach.
+* [NIST SP 800-40](https://csrc.nist.gov/pubs/sp/800/40/r4/final) is a preventative maintenance guide which specifically calls for
+vulnerability prioritization based on business impact, product criticality and
+internet accessibility factors. All of these can be set at DefectDojo’s Product level.
+* [ISO 27001/27002](https://www.iso.org/standard/27001) Control A.12.6.1 compliance requires management of technical
+vulnerabilities with Priority based on risk assessment.
+* [GDPR Article 32](https://gdpr-info.eu/art-32-gdpr/) requires risk-based security measures - user records and external
+audience flags at the Product level can help prioritize systems in your organization
+that process personal data.
+* [FISMA/FedRAMP](https://help.fedramp.gov/hc/en-us) compliance requires continuous monitoring and risk-based vulnerability remediation.
\ No newline at end of file
diff --git a/docs/package-lock.json b/docs/package-lock.json
index 0ddbd7749cb..1390ca2b359 100644
--- a/docs/package-lock.json
+++ b/docs/package-lock.json
@@ -2997,9 +2997,9 @@
}
},
"node_modules/brace-expansion": {
- "version": "1.1.11",
- "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
- "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
+ "version": "1.1.12",
+ "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
+ "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0",
@@ -4420,9 +4420,9 @@
}
},
"node_modules/purgecss/node_modules/brace-expansion": {
- "version": "2.0.1",
- "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz",
- "integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz",
+ "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==",
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0"
diff --git a/dojo/settings/settings.dist.py b/dojo/settings/settings.dist.py
index 19230259bb7..e8276c28c18 100644
--- a/dojo/settings/settings.dist.py
+++ b/dojo/settings/settings.dist.py
@@ -815,8 +815,8 @@ def generate_url(scheme, double_slashes, user, password, host, port, path, param
REST_FRAMEWORK["DEFAULT_AUTHENTICATION_CLASSES"] += ("rest_framework.authentication.TokenAuthentication",)
SPECTACULAR_SETTINGS = {
- "TITLE": "Defect Dojo API v2",
- "DESCRIPTION": "Defect Dojo - Open Source vulnerability Management made easy. Prefetch related parameters/responses not yet in the schema.",
+ "TITLE": "DefectDojo API v2",
+ "DESCRIPTION": "DefectDojo - Open Source vulnerability Management made easy. Prefetch related parameters/responses not yet in the schema.",
"VERSION": __version__,
"SCHEMA_PATH_PREFIX": "/api/v2",
# OTHER SETTINGS
diff --git a/dojo/tools/blackduck/importer.py b/dojo/tools/blackduck/importer.py
index 1420a639e73..7273770c6ec 100644
--- a/dojo/tools/blackduck/importer.py
+++ b/dojo/tools/blackduck/importer.py
@@ -19,11 +19,10 @@ def parse_findings(self, report: Path) -> Iterable[BlackduckFinding]:
class BlackduckImporter(Importer):
def parse_findings(self, report: Path) -> Iterable[BlackduckFinding]:
- if not issubclass(type(report), Path):
- report = Path(report.temporary_file_path())
-
- if zipfile.is_zipfile(str(report)):
+ if zipfile.is_zipfile(report):
+ report.seek(0) # rewind after the check
return self._process_zipfile(report)
+ report.seek(0) # rewind after the check
return self._process_csvfile(report)
def _process_csvfile(self, report: Path):
@@ -31,10 +30,11 @@ def _process_csvfile(self, report: Path):
If passed in a regular security.csv, process it.
No file information then.
"""
- security_issues = {}
- with report.open(encoding="utf-8") as f:
- security_issues = self.__partition_by_key(f)
+ content = report.read()
+ if isinstance(content, bytes):
+ content = content.decode("utf-8")
+ security_issues = self.__partition_by_key(io.StringIO(content))
project_ids = set(security_issues.keys())
return self._process_project_findings(
project_ids, security_issues, None,
@@ -48,7 +48,7 @@ def _process_zipfile(self, report):
files = {}
security_issues = {}
- with zipfile.ZipFile(str(report)) as zipf:
+ with zipfile.ZipFile(report) as zipf:
for full_file_name in zipf.namelist():
file_name = full_file_name.split("/")[-1]
# Backwards compatibility, newer versions of Blackduck have a source file rather
diff --git a/dojo/tools/blackduck_binary_analysis/importer.py b/dojo/tools/blackduck_binary_analysis/importer.py
index fe1fce3a14a..3e737fb4dd0 100644
--- a/dojo/tools/blackduck_binary_analysis/importer.py
+++ b/dojo/tools/blackduck_binary_analysis/importer.py
@@ -1,4 +1,5 @@
import csv
+import io
from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Iterable
@@ -17,24 +18,18 @@ def parse_findings(self, report: Path) -> Iterable[BlackduckBinaryAnalysisFindin
class BlackduckBinaryAnalysisImporter(Importer):
def parse_findings(self, report: Path) -> Iterable[BlackduckBinaryAnalysisFinding]:
orig_report_name = Path(report.name)
- if not issubclass(type(report), Path):
- report = Path(report.temporary_file_path())
-
- return self._process_csvfile(report, orig_report_name)
-
- def _process_csvfile(self, report: Path, orig_report_name):
- """If passed a CSV file, process."""
- vulnerabilities = {}
- with report.open(encoding="utf-8") as f:
- vulnerabilities = self.__partition_by_key(f)
+ content = report.read()
+ if isinstance(content, bytes):
+ content = content.decode("utf-8")
+ vulnerabilities = self.__partition_by_key(io.StringIO(content))
sha1_hash_keys = set(vulnerabilities.keys())
return self._process_vuln_results(
- sha1_hash_keys, report, orig_report_name, vulnerabilities,
+ sha1_hash_keys, orig_report_name, vulnerabilities,
)
def _process_vuln_results(
- self, sha1_hash_keys, report, orig_report_name, vulnerabilities,
+ self, sha1_hash_keys, orig_report_name, vulnerabilities,
):
"""Process findings for each project."""
for sha1_hash_key in sha1_hash_keys:
diff --git a/dojo/tools/blackduck_component_risk/importer.py b/dojo/tools/blackduck_component_risk/importer.py
index 25dab016390..56f04f73eb0 100644
--- a/dojo/tools/blackduck_component_risk/importer.py
+++ b/dojo/tools/blackduck_component_risk/importer.py
@@ -26,9 +26,8 @@ def parse_findings(self, report: Path) -> (dict, dict, dict):
:param report: Path to zip file
:return: ( {component_id:details} , {component_id:[vulns]}, {component_id:[source]} )
"""
- if not issubclass(type(report), Path):
- report = Path(report.temporary_file_path())
- if zipfile.is_zipfile(str(report)):
+ if zipfile.is_zipfile(report):
+ report.seek(0) # rewind after the check
return self._process_zipfile(report)
msg = f"File {report} not a zip!"
raise ValueError(msg)
@@ -43,7 +42,7 @@ def _process_zipfile(self, report: Path) -> (dict, dict, dict):
components = {}
source = {}
try:
- with zipfile.ZipFile(str(report)) as zipf:
+ with zipfile.ZipFile(report) as zipf:
c_file = False
s_file = False
for full_file_name in zipf.namelist():
diff --git a/dojo/tools/mend/parser.py b/dojo/tools/mend/parser.py
index 51688698fc1..c71ed89e2f3 100644
--- a/dojo/tools/mend/parser.py
+++ b/dojo/tools/mend/parser.py
@@ -208,9 +208,12 @@ def _build_common_output(node, lib_name=None):
impact=impact if impact is not None else None,
steps_to_reproduce="**Locations Found**: " + ", ".join(locations) if locations is not None else None,
kev_date=kev_date if kev_date is not None else None,
- known_exploited=known_exploited if known_exploited is not None else None,
- ransomware_used=ransomware_used if ransomware_used is not None else None,
)
+ # only overwrite default values if they are not None #12989
+ if known_exploited is not None:
+ new_finding.known_exploited = known_exploited
+ if ransomware_used is not None:
+ new_finding.ransomware_used = ransomware_used
if cve:
new_finding.unsaved_vulnerability_ids = [cve]
diff --git a/helm/defectdojo/Chart.yaml b/helm/defectdojo/Chart.yaml
index da5deb9a832..0268c70462f 100644
--- a/helm/defectdojo/Chart.yaml
+++ b/helm/defectdojo/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: "2.50.0-dev"
description: A Helm chart for Kubernetes to install DefectDojo
name: defectdojo
-version: 1.6.204-dev
+version: 1.6.205-dev
icon: https://www.defectdojo.org/img/favicon.ico
maintainers:
- name: madchap
diff --git a/unittests/tools/test_blackduck_binary_analysis_parser.py b/unittests/tools/test_blackduck_binary_analysis_parser.py
index 22a810cfce7..d378de0567d 100644
--- a/unittests/tools/test_blackduck_binary_analysis_parser.py
+++ b/unittests/tools/test_blackduck_binary_analysis_parser.py
@@ -6,55 +6,55 @@
class TestBlackduckBinaryAnalysisParser(DojoTestCase):
def test_parse_no_vulns(self):
- testfile = get_unit_tests_scans_path("blackduck_binary_analysis") / "no_vuln.csv"
- parser = BlackduckBinaryAnalysisParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(0, len(findings))
+ with (get_unit_tests_scans_path("blackduck_binary_analysis") / "no_vuln.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckBinaryAnalysisParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(0, len(findings))
def test_parse_one_vuln(self):
- testfile = get_unit_tests_scans_path("blackduck_binary_analysis") / "one_vuln.csv"
- parser = BlackduckBinaryAnalysisParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(1, len(findings))
- for finding in findings:
- self.assertIsNotNone(finding.title)
- self.assertEqual(
- "instrument.dll: zlib 1.2.13 Vulnerable to CVE-2023-45853",
- finding.title,
- )
-
- self.assertIsNotNone(finding.description)
- self.assertIsNotNone(finding.severity)
- self.assertEqual("Critical", finding.severity)
-
- self.assertIsNotNone(finding.component_name)
- self.assertEqual("zlib", finding.component_name)
-
- self.assertIsNotNone(finding.component_version)
- self.assertEqual("1.2.13", finding.component_version)
-
- self.assertIsNotNone(finding.file_path)
- self.assertEqual(
- "JRE.msi:JRE.msi-30276-90876123.cab:instrument.dll",
- finding.file_path,
- )
-
- self.assertIsNotNone(finding.vuln_id_from_tool)
- self.assertEqual("CVE-2023-45853", finding.vuln_id_from_tool)
-
- self.assertIsNotNone(finding.unique_id_from_tool)
+ with (get_unit_tests_scans_path("blackduck_binary_analysis") / "one_vuln.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckBinaryAnalysisParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(1, len(findings))
+ for finding in findings:
+ self.assertIsNotNone(finding.title)
+ self.assertEqual(
+ "instrument.dll: zlib 1.2.13 Vulnerable to CVE-2023-45853",
+ finding.title,
+ )
+
+ self.assertIsNotNone(finding.description)
+ self.assertIsNotNone(finding.severity)
+ self.assertEqual("Critical", finding.severity)
+
+ self.assertIsNotNone(finding.component_name)
+ self.assertEqual("zlib", finding.component_name)
+
+ self.assertIsNotNone(finding.component_version)
+ self.assertEqual("1.2.13", finding.component_version)
+
+ self.assertIsNotNone(finding.file_path)
+ self.assertEqual(
+ "JRE.msi:JRE.msi-30276-90876123.cab:instrument.dll",
+ finding.file_path,
+ )
+
+ self.assertIsNotNone(finding.vuln_id_from_tool)
+ self.assertEqual("CVE-2023-45853", finding.vuln_id_from_tool)
+
+ self.assertIsNotNone(finding.unique_id_from_tool)
def test_parse_many_vulns(self):
- testfile = get_unit_tests_scans_path("blackduck_binary_analysis") / "many_vulns.csv"
- parser = BlackduckBinaryAnalysisParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(5, len(findings))
- for finding in findings:
- self.assertIsNotNone(finding.title)
- self.assertIsNotNone(finding.description)
- self.assertIsNotNone(finding.severity)
- self.assertIsNotNone(finding.component_name)
- self.assertIsNotNone(finding.component_version)
- self.assertIsNotNone(finding.file_path)
- self.assertIsNotNone(finding.vuln_id_from_tool)
- self.assertIsNotNone(finding.unique_id_from_tool)
+ with (get_unit_tests_scans_path("blackduck_binary_analysis") / "many_vulns.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckBinaryAnalysisParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(5, len(findings))
+ for finding in findings:
+ self.assertIsNotNone(finding.title)
+ self.assertIsNotNone(finding.description)
+ self.assertIsNotNone(finding.severity)
+ self.assertIsNotNone(finding.component_name)
+ self.assertIsNotNone(finding.component_version)
+ self.assertIsNotNone(finding.file_path)
+ self.assertIsNotNone(finding.vuln_id_from_tool)
+ self.assertIsNotNone(finding.unique_id_from_tool)
diff --git a/unittests/tools/test_blackduck_component_risk_parser.py b/unittests/tools/test_blackduck_component_risk_parser.py
index 605c738281d..5ae931bc1f0 100644
--- a/unittests/tools/test_blackduck_component_risk_parser.py
+++ b/unittests/tools/test_blackduck_component_risk_parser.py
@@ -6,7 +6,7 @@
class TestBlackduckComponentRiskParser(DojoTestCase):
def test_blackduck_enhanced_zip_upload(self):
- testfile = get_unit_tests_scans_path("blackduck_component_risk") / "blackduck_hub_component_risk.zip"
- parser = BlackduckComponentRiskParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(12, len(findings))
+ with (get_unit_tests_scans_path("blackduck_component_risk") / "blackduck_hub_component_risk.zip").open(mode="rb") as testfile:
+ parser = BlackduckComponentRiskParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(12, len(findings))
diff --git a/unittests/tools/test_blackduck_parser.py b/unittests/tools/test_blackduck_parser.py
index aaa9b723185..27dfd9c3f34 100644
--- a/unittests/tools/test_blackduck_parser.py
+++ b/unittests/tools/test_blackduck_parser.py
@@ -6,49 +6,49 @@
class TestBlackduckHubParser(DojoTestCase):
def test_blackduck_csv_parser_has_no_finding(self):
- testfile = get_unit_tests_scans_path("blackduck") / "no_vuln.csv"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(0, len(findings))
+ with (get_unit_tests_scans_path("blackduck") / "no_vuln.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(0, len(findings))
def test_blackduck_csv_parser_has_one_finding(self):
- testfile = get_unit_tests_scans_path("blackduck") / "one_vuln.csv"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(1, len(findings))
+ with (get_unit_tests_scans_path("blackduck") / "one_vuln.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(1, len(findings))
def test_blackduck_csv_parser_has_many_findings(self):
- testfile = get_unit_tests_scans_path("blackduck") / "many_vulns.csv"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(24, len(findings))
- findings = list(findings)
- self.assertEqual(1, len(findings[10].unsaved_vulnerability_ids))
- self.assertEqual("CVE-2007-3386", findings[10].unsaved_vulnerability_ids[0])
- self.assertEqual(findings[4].component_name, "Apache Tomcat")
- self.assertEqual(findings[2].component_name, "Apache HttpComponents Client")
- self.assertEqual(findings[4].component_version, "5.5.23")
- self.assertEqual(findings[2].component_version, "4.5.2")
+ with (get_unit_tests_scans_path("blackduck") / "many_vulns.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(24, len(findings))
+ findings = list(findings)
+ self.assertEqual(1, len(findings[10].unsaved_vulnerability_ids))
+ self.assertEqual("CVE-2007-3386", findings[10].unsaved_vulnerability_ids[0])
+ self.assertEqual(findings[4].component_name, "Apache Tomcat")
+ self.assertEqual(findings[2].component_name, "Apache HttpComponents Client")
+ self.assertEqual(findings[4].component_version, "5.5.23")
+ self.assertEqual(findings[2].component_version, "4.5.2")
def test_blackduck_csv_parser_new_format_has_many_findings(self):
- testfile = get_unit_tests_scans_path("blackduck") / "many_vulns_new_format.csv"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- findings = list(findings)
- self.assertEqual(9, len(findings))
- self.assertEqual(findings[0].component_name, "kryo")
- self.assertEqual(findings[2].component_name, "jackson-databind")
- self.assertEqual(findings[0].component_version, "3.0.3")
- self.assertEqual(findings[2].component_version, "2.9.9.3")
+ with (get_unit_tests_scans_path("blackduck") / "many_vulns_new_format.csv").open(encoding="utf-8") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ findings = list(findings)
+ self.assertEqual(9, len(findings))
+ self.assertEqual(findings[0].component_name, "kryo")
+ self.assertEqual(findings[2].component_name, "jackson-databind")
+ self.assertEqual(findings[0].component_version, "3.0.3")
+ self.assertEqual(findings[2].component_version, "2.9.9.3")
def test_blackduck_enhanced_has_many_findings(self):
- testfile = get_unit_tests_scans_path("blackduck") / "blackduck_enhanced_py3_unittest.zip"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(11, len(findings))
+ with (get_unit_tests_scans_path("blackduck") / "blackduck_enhanced_py3_unittest.zip").open(mode="rb") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(11, len(findings))
def test_blackduck_enhanced_zip_upload(self):
- testfile = get_unit_tests_scans_path("blackduck") / "blackduck_enhanced_py3_unittest_v2.zip"
- parser = BlackduckParser()
- findings = parser.get_findings(testfile, Test())
- self.assertEqual(11, len(findings))
+ with (get_unit_tests_scans_path("blackduck") / "blackduck_enhanced_py3_unittest_v2.zip").open(mode="rb") as testfile:
+ parser = BlackduckParser()
+ findings = parser.get_findings(testfile, Test())
+ self.assertEqual(11, len(findings))