Merged
2 changes: 2 additions & 0 deletions docs/content/en/about_defectdojo/about_docs.md
Original file line number Diff line number Diff line change
@@ -67,6 +67,8 @@ Other guides for working with an Open-Source install:

If you run into trouble with an Open Source install, we highly recommend asking questions on the [OWASP Slack](https://owasp.org/slack/invite). Our community members are active on the **# defectdojo** channel and can help you with issues you’re facing.

Looking for cool DefectDojo laptop stickers? As a thank you for being a part of the DefectDojo community, you can sign up to get some free DefectDojo stickers. For more information, check out [this link](https://defectdojo.com/defectdojo-sticker-request).

### Online Demo

A running example of DefectDojo (Open-Source Edition) is available on [our demo server](https://demo.defectdojo.org), using the credentials `admin` / `1Defectdojo@demo#appsec`. The demo server is refreshed regularly and provisioned with some sample data.
5 changes: 5 additions & 0 deletions docs/content/en/changelog/changelog.md
@@ -10,6 +10,11 @@ For Open Source release notes, please see the [Releases page on GitHub](https://

## Sept 2025: v2.50

### Sept 15, 2025: v2.50.2

* **(Pro UI)** Added Any/All status filtering. Filtering by status allows you to apply either AND (inner join) or OR (outer join) logic to the filter.
* **(Pro UI)** Added Contact Support form for On-Premise installs.

### Sept 9, 2025: v2.50.1

* **(Tools)** Removed CSV limit for Qualys HackerGuardian
@@ -3,9 +3,9 @@ title: 'Generic Findings Import'
toc_hide: true
---

Import Generic findings in CSV or JSON format.
Generic Findings Import can be used to import any report in CSV or JSON format.

Attributes supported for CSV:
### Supported Attributes (CSV)

- Date: Date of the finding in mm/dd/yyyy format.
- Title: Title of the finding
@@ -37,6 +37,8 @@ The CSV expects a header row with the names of the attributes.

Date fields are parsed using [dateutil.parse](https://dateutil.readthedocs.io/en/stable/parser.html), supporting a variety of formats such as YYYY-MM-DD or ISO 8601.
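To sanity-check a report before uploading, you can assemble and re-read a minimal CSV using the headers described above. Only the header names come from this document; the finding values below are invented for illustration:

```python
import csv
import io

# Header names from the supported-attributes list; row values are made up.
headers = ["Date", "Title", "CweId", "Url", "Severity", "Description",
           "Mitigation", "Impact", "References", "Active", "Verified",
           "FalsePositive", "Duplicate"]
row = ["01/06/2021", "Example finding", "287", "https://example.com/login",
       "Medium", "An example description", "An example mitigation",
       "An example impact", "", "TRUE", "FALSE", "FALSE", "FALSE"]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(headers)   # Generic Findings Import expects this header row
writer.writerow(row)

# Re-read the CSV the way a parser would, keyed by header name.
parsed = next(csv.DictReader(io.StringIO(buffer.getvalue())))
print(parsed["Severity"])  # Medium
```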

### Supported Attributes (JSON)

The list of supported fields in JSON format:

- title: **Required.** String
@@ -93,7 +95,7 @@ The list of supported fields in JSON format:
- ransomware_used: Bool
- fix_available: Bool

Example of JSON format:
### Example JSON

```JSON
{
@@ -1,136 +1,18 @@
---
title: "Generic Findings Import"
title: "Using Generic Findings Import"
toc_hide: true
weight: 2
---

You can use Generic Findings Import as a method to ingest JSON or CSV files into DefectDojo which are not already in the supported parsers list.
Open Source and Pro users can use Generic Findings Import to ingest JSON or CSV files from tools that are not already on the supported Tools list into DefectDojo.

Files uploaded using Generic Findings Import must conform to the accepted format with respect to CSV column headers / JSON attributes.
Using Generic Findings Import will create a new Test Type in your DefectDojo instance called "`{The Name Of Your Test}` (Generic Findings Import)". For example, this JSON content will result in a Test Type called "Example Report (Generic Findings Import)":

These attributes are supported for CSV:

- Date: Date of the finding in mm/dd/yyyy format.
- Title: Title of the finding
- CweId: CWE identifier, must be an integer value.
- epss_score: The probability of exploitation in the next 30 days, must be a float value between 0 and 1.0.
- epss_percentile: The proportion of all scored vulnerabilities with the same or a lower EPSS score, must be a float value between 0 and 1.0.
- Url: Url associated with the finding.
- Severity: Severity of the finding. Must be one of Info, Low, Medium, High, or Critical.
- Description: Description of the finding. Can be multiple lines if enclosed in double quotes.
- Mitigation: Possible Mitigations for the finding. Can be multiple lines if enclosed in double quotes.
- Impact: Detailed impact of the finding. Can be multiple lines if enclosed in double quotes.
- References: References associated with the finding. Can be multiple lines if enclosed in double quotes.
- Active: Indicator if the finding is active. Must be empty, TRUE or FALSE
- Verified: Indicator if the finding has been verified. Must be empty, TRUE, or FALSE
- FalsePositive: Indicator if the finding is a false positive. Must be TRUE, or FALSE.
- Duplicate: Indicator if the finding is a duplicate. Must be TRUE, or FALSE

The CSV expects a header row with the names of the attributes.

Example of JSON format:

```JSON
{
"findings": [
{
"title": "test title with endpoints as dict",
"description": "Some very long description with\n\n some UTF-8 chars à qu'il est beau",
"severity": "Medium",
"mitigation": "Some mitigation",
"date": "2021-01-06",
"cve": "CVE-2020-36234",
"cwe": 261,
"cvssv3": "CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:L/I:L/A:N",
"file_path": "src/first.cpp",
"line": 13,
"endpoints": [
{
"host": "exemple.com"
}
]
},
{
"title": "test title with endpoints as strings",
"description": "Some very long description with\n\n some UTF-8 chars à qu'il est beau2",
"severity": "Critical",
"mitigation": "Some mitigation",
"date": "2021-01-06",
"cve": "CVE-2020-36235",
"cwe": 287,
"cvssv3": "CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:L/I:L/A:N",
"file_path": "src/two.cpp",
"line": 135,
"endpoints": [
"http://urlfiltering.paloaltonetworks.com/test-command-and-control",
"https://urlfiltering.paloaltonetworks.com:2345/test-pest"
]
},
{
"title": "test title",
"description": "Some very long description with\n\n some UTF-8 chars à qu'il est beau2",
"severity": "Critical",
"mitigation": "Some mitigation",
"date": "2021-01-06",
"cve": "CVE-2020-36236",
"cwe": 287,
"cvssv3": "CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:L/I:L/A:N",
"file_path": "src/threeeeeeeeee.cpp",
"line": 1353
}
]
}
```

This parser supports attributes that accept files as Base64 strings. These files are attached to the respective findings.

Example:

```JSON
{
"name": "My wonderful report",
"findings": [
{
"title": "Vuln with image",
"description": "Some very long description",
"severity": "Medium",
"files": [
{
"title": "Screenshot from 2017-04-10 16-54-19.png",
"data": "iVBORw0KGgoAAAANSUhEUgAABWgAAAK0CAIAAAARSkPJAAAAA3N<...>TkSuQmCC"
}
]
}
]
}
```
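If you are generating such a report programmatically, the `data` value is just the file's bytes, Base64-encoded. A minimal sketch (the `attach_file` helper name and the file contents are invented for illustration):

```python
import base64
import json

def attach_file(title, raw_bytes):
    # Hypothetical helper: Base64-encode raw bytes for the "data" attribute.
    return {"title": title, "data": base64.b64encode(raw_bytes).decode("ascii")}

finding = {
    "title": "Vuln with image",
    "description": "Some very long description",
    "severity": "Medium",
    # The PNG file signature stands in for real image bytes here.
    "files": [attach_file("screenshot.png", b"\x89PNG\r\n\x1a\n")],
}

report = json.dumps({"name": "My wonderful report", "findings": [finding]})
print(finding["files"][0]["data"])  # iVBORw0KGgo=
```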

This parser also supports some additional attributes for defining custom `TestTypes` and for influencing some meta fields on the `Test`:

- `name`: The internal name of the tool you are using. This is primarily informational, and used for reading the report manually.
- `type`: The name of the test type to create in DefectDojo, with the suffix `(Generic Findings Import)`. The suffix identifies the test type to supply when importing future reports, so be sure to keep the `type` consistent from import to import! For example, a report submitted with a `type` of `Internal Company Tool` will produce a test type in DefectDojo titled `Internal Company Tool (Generic Findings Import)`. With this newly created test type, you can define custom `HASHCODE_FIELDS` or a `DEDUPLICATION_ALGORITHM` in the settings.
- `version`: The version of the tool you are using. This is primarily informational, and is used for reading the report manually and tracking format changes from version to version.
- `description`: A brief description of the test. This could be an explanation of what the tool is reporting, where the tool is maintained, who the point of contact is when issues arise, or anything in between.
- `static_tool`: Indicates that the tool runs static analysis methods to discover vulnerabilities.
- `dynamic_tool`: Indicates that the tool runs dynamic analysis methods to discover vulnerabilities.
- `soc`: Indicates that the tool is used for reporting alerts from a SOC (Pro Edition only).

Example:

```JSON
{
"name": "My wonderful report",
"type": "My custom Test type",
"version": "1.0.5",
"description": "A unicorn tool that is capable of static analysis, dynamic analysis, and even capturing soc alerts!",
"static_tool": true,
"dynamic_tool": true,
"soc": true,
"findings": [
]
"name": "Example Report",
"findings": []
}
```

### Sample Scan Data
DefectDojo Pro users can also consider using the [Universal Parser](../universal_parser), a tool which allows for highly customizable JSON, XML, and CSV imports.

Sample Generic Findings Import scans can be found [here](https://github.com/DefectDojo/django-DefectDojo/tree/master/unittests/scans/generic).
For more information on supported parameters for Generic Findings Import, see the [Parser Guide](../file/generic).
4 changes: 4 additions & 0 deletions dojo/engagement/views.py
@@ -1552,6 +1552,10 @@ def engagement_ics(request, eid):
eng = get_object_or_404(Engagement, id=eid)
start_date = datetime.combine(eng.target_start, datetime.min.time())
end_date = datetime.combine(eng.target_end, datetime.max.time())
if timezone.is_naive(start_date):
start_date = timezone.make_aware(start_date)
if timezone.is_naive(end_date):
end_date = timezone.make_aware(end_date)
uid = f"dojo_eng_{eng.id}_{eng.product.id}"
cal = get_cal_event(
start_date,
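The guard added above avoids mixing naive and aware datetimes when building the calendar event. Its behavior can be sketched with stdlib-only stand-ins; `is_naive` and `make_aware` here are simplified reimplementations for illustration, not the real `django.utils.timezone` functions (Django's `make_aware` would use the configured `TIME_ZONE`):

```python
from datetime import date, datetime, time, timezone

def is_naive(dt):
    # Simplified stand-in for django.utils.timezone.is_naive.
    return dt.tzinfo is None or dt.tzinfo.utcoffset(dt) is None

def make_aware(dt, tz=timezone.utc):
    # Simplified stand-in for django.utils.timezone.make_aware.
    return dt.replace(tzinfo=tz)

# datetime.combine with time.min (or time.max) yields a naive datetime,
# which is why the view now guards before calling get_cal_event.
start_date = datetime.combine(date(2025, 9, 15), time.min)
assert is_naive(start_date)
if is_naive(start_date):
    start_date = make_aware(start_date)

print(is_naive(start_date))  # False
```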
68 changes: 34 additions & 34 deletions dojo/filters.py
@@ -30,7 +30,7 @@
RangeFilter,
)
from django_filters import rest_framework as filters
from django_filters.filters import ChoiceFilter, _truncate
from django_filters.filters import ChoiceFilter
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from polymorphic.base import ManagerInheritanceWarning
@@ -92,7 +92,7 @@
from dojo.risk_acceptance.queries import get_authorized_risk_acceptances
from dojo.test.queries import get_authorized_tests
from dojo.user.queries import get_authorized_users
from dojo.utils import get_system_setting, is_finding_groups_enabled
from dojo.utils import get_system_setting, is_finding_groups_enabled, truncate_timezone_aware

logger = logging.getLogger(__name__)

@@ -194,8 +194,8 @@ def filter(self, qs, value):
if earliest_finding is not None:
start_date = datetime.combine(
earliest_finding.date, datetime.min.time()).replace(tzinfo=tzinfo())
self.start_date = _truncate(start_date - timedelta(days=1))
self.end_date = _truncate(now() + timedelta(days=1))
self.start_date = truncate_timezone_aware(start_date - timedelta(days=1))
self.end_date = truncate_timezone_aware(now() + timedelta(days=1))
try:
value = int(value)
except (ValueError, TypeError):
@@ -654,16 +654,16 @@ class DateRangeFilter(ChoiceFilter):
f"{name}__day": now().day,
})),
2: (_("Past 7 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=7)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=7)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
3: (_("Past 30 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=30)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=30)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
4: (_("Past 90 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=90)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=90)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
5: (_("Current month"), lambda qs, name: qs.filter(**{
f"{name}__year": now().year,
@@ -673,8 +673,8 @@
f"{name}__year": now().year,
})),
7: (_("Past year"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=365)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=365)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
}
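`truncate_timezone_aware` itself is not part of this diff (it lives in `dojo.utils`); a plausible sketch of what such a helper does — truncate to midnight while preserving `tzinfo`, so `__gte`/`__lt` lookups compare aware values with aware values — might look like this. The implementation below is an assumption for illustration, not the actual DefectDojo code:

```python
from datetime import datetime, timedelta, timezone

def truncate_timezone_aware(value):
    # Hypothetical sketch of dojo.utils.truncate_timezone_aware: drop the
    # time-of-day components but keep the datetime timezone-aware, unlike
    # django_filters' private _truncate helper this PR stops importing.
    return value.replace(hour=0, minute=0, second=0, microsecond=0)

now = datetime(2025, 9, 15, 13, 45, tzinfo=timezone.utc)
week_ago = truncate_timezone_aware(now - timedelta(days=7))
print(week_ago.isoformat())  # 2025-09-08T00:00:00+00:00
```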

@@ -700,43 +700,43 @@ class DateRangeOmniFilter(ChoiceFilter):
f"{name}__day": now().day,
})),
2: (_("Next 7 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() + timedelta(days=1)),
f"{name}__lt": _truncate(now() + timedelta(days=7)),
f"{name}__gte": truncate_timezone_aware(now() + timedelta(days=1)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=7)),
})),
3: (_("Next 30 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() + timedelta(days=1)),
f"{name}__lt": _truncate(now() + timedelta(days=30)),
f"{name}__gte": truncate_timezone_aware(now() + timedelta(days=1)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=30)),
})),
4: (_("Next 90 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() + timedelta(days=1)),
f"{name}__lt": _truncate(now() + timedelta(days=90)),
f"{name}__gte": truncate_timezone_aware(now() + timedelta(days=1)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=90)),
})),
5: (_("Past 7 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=7)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=7)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
6: (_("Past 30 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=30)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=30)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
7: (_("Past 90 days"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=90)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=90)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
8: (_("Current month"), lambda qs, name: qs.filter(**{
f"{name}__year": now().year,
f"{name}__month": now().month,
})),
9: (_("Past year"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() - timedelta(days=365)),
f"{name}__lt": _truncate(now() + timedelta(days=1)),
f"{name}__gte": truncate_timezone_aware(now() - timedelta(days=365)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=1)),
})),
10: (_("Current year"), lambda qs, name: qs.filter(**{
f"{name}__year": now().year,
})),
11: (_("Next year"), lambda qs, name: qs.filter(**{
f"{name}__gte": _truncate(now() + timedelta(days=1)),
f"{name}__lt": _truncate(now() + timedelta(days=365)),
f"{name}__gte": truncate_timezone_aware(now() + timedelta(days=1)),
f"{name}__lt": truncate_timezone_aware(now() + timedelta(days=365)),
})),
}

@@ -818,8 +818,8 @@ def any(self, qs, name):
if earliest_finding is not None:
start_date = datetime.combine(
earliest_finding.date, datetime.min.time()).replace(tzinfo=tzinfo())
self.start_date = _truncate(start_date - timedelta(days=1))
self.end_date = _truncate(now() + timedelta(days=1))
self.start_date = truncate_timezone_aware(start_date - timedelta(days=1))
self.end_date = truncate_timezone_aware(now() + timedelta(days=1))
return qs.all()
return None

@@ -839,8 +839,8 @@ def current_year(self, qs, name):
})

def past_x_days(self, qs, name, days):
self.start_date = _truncate(now() - timedelta(days=days))
self.end_date = _truncate(now() + timedelta(days=1))
self.start_date = truncate_timezone_aware(now() - timedelta(days=days))
self.end_date = truncate_timezone_aware(now() + timedelta(days=1))
return qs.filter(**{
f"{name}__gte": self.start_date,
f"{name}__lt": self.end_date,
@@ -884,8 +884,8 @@ def filter(self, qs, value):
if earliest_finding is not None:
start_date = datetime.combine(
earliest_finding.date, datetime.min.time()).replace(tzinfo=tzinfo())
self.start_date = _truncate(start_date - timedelta(days=1))
self.end_date = _truncate(now() + timedelta(days=1))
self.start_date = truncate_timezone_aware(start_date - timedelta(days=1))
self.end_date = truncate_timezone_aware(now() + timedelta(days=1))
try:
value = int(value)
except (ValueError, TypeError):
1 change: 1 addition & 0 deletions dojo/finding/views.py
@@ -2651,6 +2651,7 @@ def finding_bulk_update_all(request, pid=None):
find.false_p = form.cleaned_data["false_p"]
find.out_of_scope = form.cleaned_data["out_of_scope"]
find.is_mitigated = form.cleaned_data["is_mitigated"]
find.under_review = form.cleaned_data["under_review"]
find.last_reviewed = timezone.now()
find.last_reviewed_by = request.user
