diff --git a/docs/_includes/_cvss4_question.md b/docs/_includes/_cvss4_question.md new file mode 100644 index 00000000..d8b2cd6a --- /dev/null +++ b/docs/_includes/_cvss4_question.md @@ -0,0 +1,5 @@ +!!! question inline end "What about CVSS v4?" + + Since this documentation was written, CVSS v4 has been released. + While we plan to address CVSS v4 in a future update to the SSVC documentation, we are + retaining our CVSS v3.1 content because it remains the most widely used version of CVSS. diff --git a/docs/_includes/_scrollable_table.md b/docs/_includes/_scrollable_table.md new file mode 100644 index 00000000..640654d8 --- /dev/null +++ b/docs/_includes/_scrollable_table.md @@ -0,0 +1,3 @@ +!!! tip "Scroll to the right to see the full table" + + The table below is scrollable to the right. diff --git a/docs/howto/bootstrap/collect.md b/docs/howto/bootstrap/collect.md index 9bb57d9b..53381718 100644 --- a/docs/howto/bootstrap/collect.md +++ b/docs/howto/bootstrap/collect.md @@ -59,6 +59,7 @@ That caveat notwithstanding, some automation is possible. At least, for those vulnerabilities that are not “automatically” PoC-ready, such as on-path attackers for TLS or network replays. + Some of the decision points require a substantial upfront analysis effort to gather risk assessment or organizational data. However, once gathered, this information can be efficiently reused across many vulnerabilities and only refreshed @@ -66,17 +67,19 @@ occasionally. !!! example "Evidence of Mission Impact" - An obvious example of this is the mission impact decision point. - To answer this, a deployer must analyze their essential functions, how they interrelate, and how they are supported. + An obvious example of this is the [Mission Impact](../../reference/decision_points/mission_impact.md) decision point. + To answer this, a deployer must analyze their Mission Essential Functions (MEFs), how they interrelate, and how they are supported. + -!!! example "Evidence of Exposure" +!!! example "Evidence of System Exposure" - Exposure is similar; answering that decision point requires an asset inventory, adequate understanding of the network + [System Exposure](../../reference/decision_points/system_exposure.md) is similar; answering that decision point requires an asset inventory, adequate understanding of the network topology, and a view of the enforced security controls. Independently operated scans, such as Shodan or Shadowserver, may play a role in evaluating exposure, but the entire exposure question cannot be reduced to a binary question of whether an organization’s assets appear in such databases. -Once the deployer has the situational awareness to understand MEFs or exposure, selecting the answer for each individual + +Once the deployer has the situational awareness to understand their Mission Essential Functions or System Exposure, selecting the answer for each individual vulnerability is usually straightforward. Stakeholders who use the prioritization method should consider releasing the priority with which they handled the @@ -94,36 +97,47 @@ deployer may want to use that information to favor the latter. In the case where no information is available or the organization has not yet matured its initial situational analysis, we can suggest something like defaults for some decision points. -!!! tip "Default Exposure Values" +!!! 
tip "Default Exploitation Values" + + [*Exploitation*](../../reference/decision_points/exploitation.md) needs no special default; if adequate searches are made for exploit code and none is + found, the answer is [*none*](../../reference/decision_points/exploitation.md). + +!!! tip "Default System Exposure Values" If the deployer does not know their exposure, that means they do not know where the devices are or how they are controlled, so they should assume [*System Exposure*](../../reference/decision_points/system_exposure.md) is [*open*](../../reference/decision_points/system_exposure.md). + +!!! tip "Default Automatable Values" + + If nothing is known about [*Automatable*](../../reference/decision_points/automatable.md), the safer answer to assume is [*yes*](../../reference/decision_points/automatable.md). + [*Value Density*](../../reference/decision_points/value_density.md) should always be answerable; if the product is uncommon, it is probably + [*diffuse*](../../reference/decision_points/value_density.md). + !!! tip "Default Safety Values" If the decision maker knows nothing about the environment in which the device is used, we suggest assuming a - [*major*](../../reference/decision_points/safety_impact.md) [*Safety Impact*](../../reference/decision_points/safety_impact.md). + [*marginal*](../../reference/decision_points/safety_impact.md) [*Safety Impact*](../../reference/decision_points/safety_impact.md). This position is conservative, but software is thoroughly embedded in daily life now, so we suggest that the decision maker provide evidence that no one’s well-being will suffer. -The reach of software exploits is no longer limited to a research network. - !!! tip "Default Mission Impact Values" Similarly, with [*Mission Impact*](../../reference/decision_points/mission_impact.md), the deployer should assume that the software is in use at the organization for a reason, and that it supports essential functions unless they have evidence otherwise. With a total lack of information, assume [*support crippled*](../../reference/decision_points/mission_impact.md) as a default. - [*Exploitation*](../../reference/decision_points/exploitation.md) needs no special default; if adequate searches are made for exploit code and none is - found, the answer is [*none*](../../reference/decision_points/exploitation.md). + +!!! example "Using Defaults" -!!! tip "Default Automatable Values" - - If nothing is known about [*Automatable*](../../reference/decision_points/automatable.md), the safer answer to assume is [*yes*](../../reference/decision_points/automatable.md). - [*Value Density*](../../reference/decision_points/value_density.md) should always be answerable; if the product is uncommon, it is probably - [*diffuse*](../../reference/decision_points/value_density.md). + Applying these defaults to the [deployer decision model](../deployer_tree.md) -The resulting decision set `{none, open, yes, medium}` results in a scheduled patch application in our recommended -deployer tree. + - *Exploitation*: none + - *System Exposure*: open + - *Automatable*: yes + - *Human Impact*: medium (combination of Safety and Mission Impacts) + - *Safety Impact*: marginal + - *Mission Impact*: support crippled + results in a `scheduled` patch application. 
diff --git a/docs/howto/bootstrap/use.md b/docs/howto/bootstrap/use.md index 282eab86..cbf9a4db 100644 --- a/docs/howto/bootstrap/use.md +++ b/docs/howto/bootstrap/use.md @@ -1,6 +1,6 @@ # Use SSVC -The [preparation](prepare.md) is complete, data is being [collected](collect.md), and now it is time to use +The [preparation](prepare.md) is complete, data has been [collected](collect.md), and now it is time to use SSVC to make decisions about how to respond to vulnerabilities. ```mermaid @@ -79,7 +79,7 @@ flowchart LR The service providers in previous examples might need to notify customers of the vulnerability and schedule maintenance windows to apply patches. Medical device manufacturers might need to develop patches, notify regulators of the vulnerability, and provide - instructions to hospital users for applying patches. + instructions to deployers (e.g., device maintainers at hospitals) for applying patches. SSVC does not prescribe any particular response process, but it does provide a structured way to make decisions within the response process. @@ -149,13 +149,18 @@ The merit in this “list all values” approach emerges when the stakeholder kn !!! example "When Some Values Are Known (But Others Are Not)" - For example, say the analyst knows that [*Value Density*](../../reference/decision_points/value_density.md) is [diffuse](../../reference/decision_points/value_density.md) but does not know the value for [Automatability](../../reference/decision_points/automatable.md). + Extending the previous example, say the analyst knows that [*Value Density*](../../reference/decision_points/value_density.md) is [diffuse](../../reference/decision_points/value_density.md) but does not know the value for [Automatability](../../reference/decision_points/automatable.md). {% include-markdown "../../_generated/decision_points/value_density.md" %} {% include-markdown "../../_generated/decision_points/automatable.md" %} - Then the analyst can usefully restrict [Utility](../../reference/decision_points/utility.md) to one of [laborious](../../reference/decision_points/utility.md) or [efficient](../../reference/decision_points/utility.md). + Therefore they could rule out [super effective](../../reference/decision_points/utility.md) + for [Utility](../../reference/decision_points/utility.md) + but not [laborious](../../reference/decision_points/utility.md) + or [efficient](../../reference/decision_points/utility.md). + In this case, the analyst could usefully restrict [Utility](../../reference/decision_points/utility.md) to one of [laborious](../../reference/decision_points/utility.md) or [efficient](../../reference/decision_points/utility.md) + while leaving [Automatability](../../reference/decision_points/automatable.md) open. As discussed below, information can change over time. Partial information may be, but is not required to be, sharpened over time into a precise value for the decision point. 
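+
+!!! example "Sketch: Tracking Partial Information"
+
+    The set-based reasoning above is straightforward to mechanize. The sketch below
+    assumes the published *Utility* mapping from (*Automatable*, *Value Density*) pairs;
+    the `possible_utility` function and variable names are illustrative, not part of any
+    SSVC library.
+
+    ```python
+    from itertools import product
+
+    # Utility as a function of (Automatable, Value Density), per the Utility decision point.
+    UTILITY = {
+        ("no", "diffuse"): "laborious",
+        ("no", "concentrated"): "efficient",
+        ("yes", "diffuse"): "efficient",
+        ("yes", "concentrated"): "super effective",
+    }
+
+    def possible_utility(automatable: set, value_density: set) -> set:
+        """Enumerate the Utility values consistent with the values still in play."""
+        return {UTILITY[pair] for pair in product(automatable, value_density)}
+
+    # Value Density is known to be diffuse; Automatable is still unknown.
+    print(possible_utility({"no", "yes"}, {"diffuse"}))
+    # -> {'laborious', 'efficient'}; super effective is ruled out
+    ```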
diff --git a/docs/howto/coordination_triage_decision.md b/docs/howto/coordination_triage_decision.md index 4519411c..b85dd4cd 100644 --- a/docs/howto/coordination_triage_decision.md +++ b/docs/howto/coordination_triage_decision.md @@ -109,5 +109,7 @@ height = "700" /> ### Table of Values +{% include-markdown "../_includes/_scrollable_table.md" heading-offset=1 %} + {{ read_csv('coord-triage-options.csv') }} diff --git a/docs/howto/deployer_tree.md b/docs/howto/deployer_tree.md index f53b9522..68853bc0 100644 --- a/docs/howto/deployer_tree.md +++ b/docs/howto/deployer_tree.md @@ -119,6 +119,8 @@ More detail about each of these decision points is provided at the links above, {% include-markdown "../_generated/decision_points/utility.md" %} {% include-markdown "../_generated/decision_points/human_impact.md" %} +In the _Human Impact_ table above, *MEF* stands for Mission Essential Function. + ## Deployer Decision Model Below we provide an example deployer prioritization policy that maps the decision points just listed to the outcomes described above. diff --git a/docs/howto/index.md b/docs/howto/index.md index 91271235..b3331037 100644 --- a/docs/howto/index.md +++ b/docs/howto/index.md @@ -54,7 +54,8 @@ The definition of choices can take a logical form, such as: - THEN priority is *scheduled*. -This example logical statement is captured in [line 35 of the deployer `.csv` file](https://github.com/CERTCC/SSVC/blob/main/data/csvs/deployer-options.csv#L35). + +This example logical statement is captured in [row 34 of the deployer `.csv` file](https://github.com/CERTCC/SSVC/blob/main/data/csvs/deployer-options.csv#L35). There are different formats for capturing these prioritization decisions depending on how and where they are going to be used. In this documentation, we primarily represent a full set of guidance on how one stakeholder will make a decision as a **decision tree**. @@ -64,7 +65,7 @@ fit your organization's needs.
-- :material-stairs: [Bootstrapping SSVC](bootstrap/index.md) +- :material-stairs: [Getting Started with SSVC](bootstrap/index.md) - :material-factory: [Supplier Decision Model](supplier_tree.md) - :material-server-network: [Deployer Decision Model](deployer_tree.md) - :material-steering: [Coordinator Decision Models](coordination_intro.md) diff --git a/docs/reference/decision_points/exploitation.md b/docs/reference/decision_points/exploitation.md index 4b99c7ec..58b398c2 100644 --- a/docs/reference/decision_points/exploitation.md +++ b/docs/reference/decision_points/exploitation.md @@ -30,15 +30,18 @@ The intent of this measure is the present state of exploitation of the vulnerabi ## CWE-IDs for *PoC* + The table below lists CWE-IDs that could be used to mark a vulnerability as *PoC* if the vulnerability is described by the CWE-ID. + !!! example "CWE-295" - For example, CWE-295, [Improper Certificate Validation - ](https://cwe.mitre.org/data/definitions/295.html), and its child CWEs, - describe improper validation of TLS certificates. These CWE-IDs could - always be marked as *PoC* since that meets condition (3) in - the definition. + For example, [CWE-295 Improper Certificate Validation + ](https://cwe.mitre.org/data/definitions/295.html), and its child CWEs, + describe improper validation of TLS certificates. These CWE-IDs could + always be marked as *PoC* since that meets condition (3) in the definition. + +{% include-markdown "../../_includes/_scrollable_table.md" heading-offset=1 %} {{ read_csv('cwe/possible-cwe-with-poc-examples.csv') }} diff --git a/docs/reference/decision_points/technical_impact.md b/docs/reference/decision_points/technical_impact.md index b47fe2d3..f7280b9a 100644 --- a/docs/reference/decision_points/technical_impact.md +++ b/docs/reference/decision_points/technical_impact.md @@ -2,7 +2,7 @@ {% include-markdown "../../_generated/decision_points/technical_impact.md" %} -When evaluating [*Technical Impact*](technical_impact.md), recall the scope definition in the [Scope Section](../../topics/scope.md). +When evaluating *Technical Impact*, recall the scope definition in the [Scope Section](../../topics/scope.md). Total control is relative to the affected component where the vulnerability resides. If a vulnerability discloses authentication or authorization credentials to the system, this information disclosure should also be scored as “total” if those credentials give an adversary total control of the component. @@ -14,7 +14,7 @@ Therefore, if there is a vulnerability then there must be some technical impact. !!! tip "Gathering Information About Technical Impact" - Assessing [*Technical Impact*](technical_impact.md) amounts to assessing the degree of control over the vulnerable component the attacker stands to gain by exploiting the vulnerability. + Assessing *Technical Impact* amounts to assessing the degree of control over the vulnerable component the attacker stands to gain by exploiting the vulnerability. One way to approach this analysis is to ask whether the control gained is *total* or not. If it is not total, it is *partial*. If an answer to one of the following questions is _yes_, then control is *total*. @@ -25,5 +25,7 @@ Therefore, if there is a vulnerability then there must be some technical impact. - does the attacker get an account with full privileges to the vulnerable component (administrator or root user accounts, for example)? This list is an evolving set of heuristics. 
-    If you find a vulnerability that should have [*total*](technical_impact.md) [*Technical Impact*](technical_impact.md) but that does not answer yes to any of these questions, please describe the example and what question we might add to this list in an issue on the [SSVC GitHub](https://github.com/CERTCC/SSVC/issues).
+    If you find a vulnerability that should have *total* *Technical Impact* but that does not answer yes to any of
+    these questions, please describe the example and what question we might add to this list in an issue on the
+    [SSVC GitHub](https://github.com/CERTCC/SSVC/issues).

diff --git a/docs/reference/decision_points/utility.md b/docs/reference/decision_points/utility.md
index d394b7dc..93e94124 100644
--- a/docs/reference/decision_points/utility.md
+++ b/docs/reference/decision_points/utility.md
@@ -14,7 +14,9 @@ This is a compound decision point, therefore it is a notational convenience.

*Utility* is independent from the state of [*Exploitation*](exploitation.md), which measures whether a set of adversaries have ready access to exploit code or are in fact exploiting the vulnerability.
In economic terms, [*Exploitation*](exploitation.md) measures whether the **capital cost** of producing reliable exploit code has been paid or not.
*Utility* estimates the **marginal cost** of each exploitation event.
-More plainly, *Utility* is about how much an adversary might benefit from a campaign using the vulnerability in question, whereas [*Exploitation*](exploitation.md) is about how easy it would be to start such a campaign or if one is already underway.
+
+Whereas [*Exploitation*](exploitation.md) is about how easy it would be to start a campaign using the vulnerability in question, or whether such a campaign is already underway,
+*Utility* is about how much an adversary might benefit from that campaign.

Heuristically, we base Utility on a combination of the value density of vulnerable components and whether potential exploitation is automatable.
This framing makes it easier to analytically derive these categories from a description of the vulnerability and the affected component.

diff --git a/docs/topics/asset_management.md b/docs/topics/asset_management.md
index 7ed99ae5..95279f87 100644
--- a/docs/topics/asset_management.md
+++ b/docs/topics/asset_management.md
@@ -32,7 +32,7 @@ Once the organization remediates or mitigates all the high-priority vulnerabilit

Asset management and risk management also drive some of the up-front work an organization would need to do to gather some of the necessary information.
This situation is not new; an asset owner cannot prioritize which fixes to deploy to its assets if it does not have an accurate inventory of its assets.
The organization can pick its choice of tools; there are about 200 asset management tools on the market [@captera].
-Emerging standards like the Software Bill of Materials (SBOM) [@manion2019sbom] would likely reduce the burden on asset management, and organizations should prefer systems which make such information available.
+Emerging standards like the [Software Bill of Materials](https://www.cisa.gov/sbom) (SBOM) would likely reduce the burden on asset management, and organizations should prefer systems that make such information available.
If an organization does not have an asset management or risk management (see also [Gathering Information About Mission Impact](../reference/decision_points/mission_impact.md)) plan and process in place, then SSVC provides some guidance as to what information is important to vulnerability diff --git a/docs/topics/decision_points_as_bricks.md b/docs/topics/decision_points_as_bricks.md index 227f6c8f..c60bc044 100644 --- a/docs/topics/decision_points_as_bricks.md +++ b/docs/topics/decision_points_as_bricks.md @@ -50,7 +50,7 @@ From that starting point, there are a few different ways you might proceed: ### Follow the examples as provided For many people, this is the experience they want. -They want to build the model exactly as it is pictured on the box, and for that purpose they can follow the instructions provided. +They want to build the model exactly as it is pictured on the box, and so they will simply follow the instructions provided. ### Adapt the examples to suit your needs diff --git a/docs/topics/evaluation_of_draft_trees.md b/docs/topics/evaluation_of_draft_trees.md index 0bec4858..4bde0c5c 100644 --- a/docs/topics/evaluation_of_draft_trees.md +++ b/docs/topics/evaluation_of_draft_trees.md @@ -12,9 +12,9 @@ The method of the pilot test is described in [Pilot Methodogy](#pilot-methodolog For this tabletop refinement, we could not select a mathematically representative set of CVEs. The goal was to select a handful of CVEs that would cover diverse types of vulnerabilities. - The CVEs that we used for our tabletop exercises are CVE-2017-8083, CVE-2019-2712, CVE-2014-5570, and CVE-2017-5753. + The CVEs that we used for our tabletop exercises are [CVE-2017-8083](https://nvd.nist.gov/vuln/detail/CVE-2017-8083), [CVE-2019-2712](https://nvd.nist.gov/vuln/detail/CVE-2019-2712), [CVE-2014-5570](https://nvd.nist.gov/vuln/detail/CVE-2014-5570), and [CVE-2017-5753](https://nvd.nist.gov/vuln/detail/CVE-2017-5753). We discussed each one from the perspective of supplier and deployer. - We evaluated CVE-2017-8083 twice because our understanding and descriptions had changed materially after the first three CVEs (six evaluation exercises). + We evaluated [CVE-2017-8083](https://nvd.nist.gov/vuln/detail/CVE-2017-8083) twice because our understanding and descriptions had changed materially after the first three CVEs (six evaluation exercises). After we were satisfied that the decision trees were clearly defined and captured our intentions, we began the formal evaluation of the draft trees, which we describe in the next section. The pilot study tested inter-rater agreement of decisions reached. Each author played the role of an analyst in both stakeholder groups (supplying and deploying) for nine vulnerabilities. We selected these nine vulnerabilities based on expert analysis, with the goal that the nine cases cover a useful series of vulnerabilities of interest. Specifically, we selected three vulnerabilities to represent safety-critical cases, three to represent regulated-systems cases, and three to represent general computing cases. 
@@ -24,9 +24,20 @@ However, we did standardize the set of evidence that was taken to be true for th Given this static information sheet, we did not synchronize an evaluation process for how to decide whether [*Exploitation*](../reference/decision_points/exploitation.md), for example, should take the value of [*none*](../reference/decision_points/exploitation.md), [*PoC*](../reference/decision_points/exploitation.md), or [*active*](../reference/decision_points/exploitation.md). We agreed on the descriptions of the decision points and the descriptions of their values. The goal of the pilot was to test the adequacy of those descriptions by evaluating whether the analysts agreed. We improved the decision point descriptions based on the results of the pilot; our changes are documented in [Improvement Instigated by the Pilot](#improvements-instigated-by-the-pilot). -In the design of the pilot, we held constant the information available about the vulnerability. This strategy restricted the analyst to decisions based on the framework given that information. That is, it controlled for differences in information search procedure among the analysts. The information search procedure should be controlled because this pilot was about the framework content, not how people answer questions based on that content. After the framework is more stable, a separate study should be devised that shows how analysts should answer the questions in the framework. The basis for the assessment that information search will be an important aspect in using the evaluation framework is based on our experience while developing the framework. During informal testing, often disagreements about a result involved a disagreement about what the scenario actually was; different information search methods and prior experiences led to different understandings of the scenario. This pilot methodology holds the information and scenario constant to test agreement about the descriptions themselves. This strategy makes constructing a prioritization system more tractable by separating problems in how people search for information from problems in how people make decisions. This paper focuses only on the structure of decision making. Advice about how to search for information about a vulnerability is a separate project that will be part of future work. In some domains, namely exploit availability, we have started that work in parallel. - -The structure of the pilot test is as follows. Table 11 provides an example of the information provided to each analyst. The supplier portfolio details use ~~strikeout font~~ because this decision item was removed after the pilot. The decision procedure for each case is as follows: for each analyst, for each vulnerability, for each stakeholder group, do the following. +In the design of the pilot, we held constant the information available about the vulnerability. +This strategy restricted the analyst to decisions based on the framework given that information. +That is, it controlled for differences in information search procedure among the analysts. +The information search procedure should be controlled because this pilot was about the framework content, not how people answer questions based on that content. +After the framework is more stable, a separate study should be devised that shows how analysts should answer the questions in the framework. 
+Our assessment that information search will be an important aspect of using the evaluation framework is based on our experience while developing the framework.
+During informal testing, disagreements about a result often involved a disagreement about what the scenario actually was; different information search methods and prior experiences led to different understandings of the scenario.
+This pilot methodology holds the information and scenario constant to test agreement about the descriptions themselves.
+This strategy makes constructing a prioritization system more tractable by separating problems in how people search for information from problems in how people make decisions.
+This paper focuses only on the structure of decision making.
+Advice about how to search for information about a vulnerability is a separate project that is left as future work.
+In some domains, namely exploit availability, we have started that work in parallel.
+
+The structure of the pilot test is as follows. The next table provides an example of the information provided to each analyst. The supplier portfolio details use ~~strikeout font~~ because this decision item was removed after the pilot. The decision procedure for each case is as follows: for each analyst, for each vulnerability, for each stakeholder group, do the following.

1. Start at the root node of the relevant decision tree (deployer or supplier).

4. Repeat this decision-and-evidence process until the analyst reaches a leaf node in the tree.

Table: Example of Scenario Information Provided to Analysts (Using
-CVE-2019-9042 as the Example)
-
-| Information Item | Description Provided to Analysts |
-| :--- | :----------- |
-| Use of Cyber-Physical System | Sitemagic’s content management system (CMS) seems to be fairly popular among smaller businesses because it starts out with a free plan to use it. Then it gradually has small increments in payment for additional features. Its ease-of-use, good designs, and one-stop-shopping for businesses attracts a fair number of clients. Like any CMS, it “manages the creation and modification of digital content. These systems typically support multiple users in a collaborative environment, allowing document management with different styles of governance and workflows. Usually the content is a website” \([Wikipedia](https://en.wikipedia.org/w/index.php?title=Content_management_system&oldid=913022120), 2019\) |
-| State of Exploitation | Appears to be active exploitation of this vulnerability according to NVD. They have linked to the exploit: http://www.iwantacve.cn/index.php/archives/116/. |
-| ~~Developer Portfolio Details~~ | ~~Sitemagic is an open-source project. The only thing the brand name applies to is the CMS, and it does not appear to be part of another open-source umbrella. The project is under active maintenance (i.e., it is not a dead project).~~ |
-| Technical Impact of Exploit | An authenticated user can upload a .php file to execute arbitrary code with system privileges. |
-| Scenario Blurb | We are a small business that uses Sitemagic to help run our business. Sitemagic handles everything from digital marketing and site design to facilitating the e-commerce transactions of the website. We rely on this website heavily, even though we do have a brick-and-mortar store. 
Many times, products are not available in-store, but are available online, so we point many customers to our online store. | -| Deployer Mission | We are a private company that must turn a profit to remain competitive. We want to provide customers with a valuable product at a reasonable price, while still turning a profit to run the business. As we are privately held (and not public), we are free to choose the best growth strategy (that is, we are not legally bound to demonstrate quarterly earnings for shareholders and can take a longer-term view if it makes us competitive). | -| Deployment of Affected System | We have deployed this system such that only the web designer Cheryl and the IT admin Sally are allowed to access the CMS as users. They login through a password-protected portal that can be accessed anywhere in the world for remote administration. The CMS publishes content to the web, and that web server and site are publicly available. | +[CVE-2019-9042](https://nvd.nist.gov/vuln/detail/CVE-2019-9042) as the Example) + +| Information Item | Description Provided to Analysts | +|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Use of Cyber-Physical System | Sitemagic’s content management system (CMS) seems to be fairly popular among smaller businesses because it starts out with a free plan to use it. Then it gradually has small increments in payment for additional features. Its ease-of-use, good designs, and one-stop-shopping for businesses attracts a fair number of clients. Like any CMS, it “manages the creation and modification of digital content. These systems typically support multiple users in a collaborative environment, allowing document management with different styles of governance and workflows. Usually the content is a website” ([Wikipedia](https://en.wikipedia.org/w/index.php?title=Content_management_system&oldid=913022120), 2019) | +| State of Exploitation | Appears to be active exploitation of this vulnerability according to NVD. They have linked to the exploit: http://www.iwantacve.cn/index.php/archives/116/. | +| ~~Developer Portfolio Details~~ | ~~Sitemagic is an open-source project. The only thing the brand name applies to is the CMS, and it does not appear to be part of another open-source umbrella. The project is under active maintenance (i.e., it is not a dead project).~~ | +| Technical Impact of Exploit | An authenticated user can upload a .php file to execute arbitrary code with system privileges. | +| Scenario Blurb | We are a small business that uses Sitemagic to help run our business. Sitemagic handles everything from digital marketing and site design to facilitating the e-commerce transactions of the website. We rely on this website heavily, even though we do have a brick-and-mortar store. 
Many times, products are not available in-store, but are available online, so we point many customers to our online store. | +| Deployer Mission | We are a private company that must turn a profit to remain competitive. We want to provide customers with a valuable product at a reasonable price, while still turning a profit to run the business. As we are privately held (and not public), we are free to choose the best growth strategy (that is, we are not legally bound to demonstrate quarterly earnings for shareholders and can take a longer-term view if it makes us competitive). | +| Deployment of Affected System | We have deployed this system such that only the web designer Cheryl and the IT admin Sally are allowed to access the CMS as users. They login through a password-protected portal that can be accessed anywhere in the world for remote administration. The CMS publishes content to the web, and that web server and site are publicly available. | This test structure produced a series of lists similar in form to the -contents of Table 12. Analysts also noted how much time they spent on +contents of the table below. Analysts also noted how much time they spent on each vulnerability in each stakeholder group. Table: Example Documentation of a Single Decision Point -| Decision Point | Branch Selected | Supporting Evidence | -| :--- | :--- | :------- | -| Exploitation=active | Controlled | The CMS has a limited number of authorized users, and the vulnerability is exploitable only by an authenticated user | +| Decision Point | Branch Selected | Supporting Evidence | +|:--------------------|:------------------|:---------------------------------------------------------------------------------------------------------------------| +| Exploitation=active | Controlled | The CMS has a limited number of authorized users, and the vulnerability is exploitable only by an authenticated user | We evaluated inter-rater agreement in two stages. In the first stage, each analyst independently documented their decisions. This stage produced 18 sets of decisions (nine vulnerabilities across each of two stakeholder groups) per analyst. In the second stage, we met to discuss decision points where at least one analyst differed from the others. If any analyst changed their decision, they appended the information and evidence they gained during this meeting in the “supporting evidence” value in their documentation. No changes to decisions were forced, and prior decisions were not erased, just amended. After the second stage, we calculated some statistical measures of inter-rater agreement to help guide the analysis of problem areas. @@ -78,69 +89,69 @@ Third, the pilot provides a proof of concept method and metric that any vulnerab ### Vulnerabilities used as examples -The vulnerabilities used as case studies are as follows. All quotes are from the [National Vulnerability Database (NVD)](https://nvd.nist.gov/) and are illustrative of the vulnerability; however, during the study each vulnerability was evaluated according to information analogous to that in Table 11. +The vulnerabilities used as case studies are as follows. All quotes are from the [National Vulnerability Database (NVD)](https://nvd.nist.gov/) and are illustrative of the vulnerability; however, during the study each vulnerability was evaluated according to information analogous to that in the scenario table above. 
### Safety-Critical Cases - - CVE-2015-5374: “Vulnerability … in \[Siemens\] Firmware variant PROFINET IO for EN100 Ethernet module… Specially crafted packets sent to port 50000/UDP could cause a denial-of-service of the affected device…” + - [CVE-2015-5374](https://nvd.nist.gov/vuln/detail/CVE-2015-5374): “Vulnerability … in \[Siemens\] Firmware variant PROFINET IO for EN100 Ethernet module… Specially crafted packets sent to port 50000/UDP could cause a denial-of-service of the affected device…” - - CVE-2014-0751: “Directory traversal vulnerability in … GE Intelligent Platforms Proficy HMI/SCADA - CIMPLICITY before 8.2 SIM 24, and Proficy Process Systems with CIMPLICITY, allows remote attackers to execute arbitrary code via a crafted message to TCP port 10212, aka ZDI-CAN-1623.” + - [CVE-2014-0751](https://nvd.nist.gov/vuln/detail/CVE-2014-0751): “Directory traversal vulnerability in … GE Intelligent Platforms Proficy HMI/SCADA - CIMPLICITY before 8.2 SIM 24, and Proficy Process Systems with CIMPLICITY, allows remote attackers to execute arbitrary code via a crafted message to TCP port 10212, aka ZDI-CAN-1623.” - - CVE-2015-1014: “A successful exploit of these vulnerabilities requires the local user to load a crafted DLL file in the system directory on servers running Schneider Electric OFS v3.5 with version v7.40 of SCADA Expert Vijeo Citect/CitectSCADA, OFS v3.5 with version v7.30 of Vijeo Citect/CitectSCADA, and OFS v3.5 with version v7.20 of Vijeo Citect/CitectSCADA. If the application attempts to open that file, the application could crash or allow the attacker to execute arbitrary code.” + - [CVE-2015-1014](https://nvd.nist.gov/vuln/detail/CVE-2015-1014): “A successful exploit of these vulnerabilities requires the local user to load a crafted DLL file in the system directory on servers running Schneider Electric OFS v3.5 with version v7.40 of SCADA Expert Vijeo Citect/CitectSCADA, OFS v3.5 with version v7.30 of Vijeo Citect/CitectSCADA, and OFS v3.5 with version v7.20 of Vijeo Citect/CitectSCADA. If the application attempts to open that file, the application could crash or allow the attacker to execute arbitrary code.” ### Regulated Systems Cases - - CVE-2018-14781: “Medtronic insulin pump \[specific versions\] when paired with a remote controller and having the “easy bolus” and “remote bolus” options enabled (non-default), are vulnerable to a capture-replay attack. An attacker can … cause an insulin (bolus) delivery.” + - [CVE-2018-14781](https://nvd.nist.gov/vuln/detail/CVE-2018-14781): “Medtronic insulin pump \[specific versions\] when paired with a remote controller and having the “easy bolus” and “remote bolus” options enabled (non-default), are vulnerable to a capture-replay attack. An attacker can … cause an insulin (bolus) delivery.” - - CVE-2017-9590: “The State Bank of Waterloo Mobile … app 3.0.2 … for iOS does not verify X.509 certificates from SSL servers, which allows man-in-the-middle attackers to spoof servers and obtain sensitive information via a crafted certificate.” + - [CVE-2017-9590](https://nvd.nist.gov/vuln/detail/CVE-2017-9590): “The State Bank of Waterloo Mobile … app 3.0.2 … for iOS does not verify X.509 certificates from SSL servers, which allows man-in-the-middle attackers to spoof servers and obtain sensitive information via a crafted certificate.” - - CVE-2017-3183: “Sage XRT Treasury, version 3, fails to properly restrict database access to authorized users, which may enable any authenticated user to gain full access to privileged database functions. 
Sage XRT Treasury is a business finance management application. …” + - [CVE-2017-3183](https://nvd.nist.gov/vuln/detail/CVE-2017-3183): “Sage XRT Treasury, version 3, fails to properly restrict database access to authorized users, which may enable any authenticated user to gain full access to privileged database functions. Sage XRT Treasury is a business finance management application. …” ### General Computing Cases - - CVE-2019-2691: “Vulnerability in the MySQL Server component of Oracle MySQL (subcomponent: Server: Security: Roles). Supported versions that are affected are 8.0.15 and prior. Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to … complete DoS of MySQL Server.” + - [CVE-2019-2691](https://nvd.nist.gov/vuln/detail/CVE-2019-2691): “Vulnerability in the MySQL Server component of Oracle MySQL (subcomponent: Server: Security: Roles). Supported versions that are affected are 8.0.15 and prior. Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to … complete DoS of MySQL Server.” - - CVE-2019-9042: “\[I\]n Sitemagic CMS v4.4… the user can upload a .php file to execute arbitrary code, as demonstrated by 404.php. This can only occur if the administrator neglects to set FileExtensionFilter and there are untrusted user accounts. …” + - [CVE-2019-9042](https://nvd.nist.gov/vuln/detail/CVE-2019-9042): “\[I\]n Sitemagic CMS v4.4… the user can upload a .php file to execute arbitrary code, as demonstrated by 404.php. This can only occur if the administrator neglects to set FileExtensionFilter and there are untrusted user accounts. …” - - CVE-2017-5638: “The Jakarta Multipart parser in Apache Struts 2 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1 has incorrect exception handling and error-message generation during file-upload attempts, which allows remote attackers to execute arbitrary commands via crafted \[specific headers\], as exploited in the wild in March 2017…” + - [CVE-2017-5638](https://nvd.nist.gov/vuln/detail/CVE-2017-5638): “The Jakarta Multipart parser in Apache Struts 2 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1 has incorrect exception handling and error-message generation during file-upload attempts, which allows remote attackers to execute arbitrary commands via crafted \[specific headers\], as exploited in the wild in March 2017…” ## Pilot Results -For each of the nine CVEs, six analysts rated the priority of the vulnerability as both a supplier and deployer. Table 13 summarizes the results by reporting the inter-rater agreement for each decision point. For all measures, agreement (*k*) is above zero, which is generally interpreted as some agreement among analysts. Below zero is interpreted as noise or discord. Closer to 1 indicates more or stronger agreement. +For each of the nine CVEs, six analysts rated the priority of the vulnerability as both a supplier and deployer. The table below summarizes the results by reporting the inter-rater agreement for each decision point. For all measures, agreement (*k*) is above zero, which is generally interpreted as some agreement among analysts. Below zero is interpreted as noise or discord. Closer to 1 indicates more or stronger agreement. How close *k* should be to 1 before agreement can be considered strong enough or reliable enough is a matter of some debate. The value certainly depends on the number of options among which analysts select. 
For those decision points with five options (mission and safety impact), agreement is lowest.
Although portfolio value has a higher *k* than mission or safety impact, it may not actually have higher agreement because portfolio value only has two options.
The results for portfolio value are nearly indistinguishable as far as level of statistical agreement from mission impact and safety impact.
The statistical community does not have hard and fast rules for cut lines on adequate agreement. We treat *k* as a descriptive statistic rather than a test statistic.

-Table 13 is encouraging, though not conclusive. *k*\<0 is a strong sign of discordance. Although it is unclear how close to 1 is success, *k*\<0 would be clear sign of failure. In some ways, these results may be undercounting the agreement for SSVC as presented. These results are for SSVC prior to the improvements documented in [Improvement Instigated by the Pilot](#improvements-instigated-by-the-pilot), which are implemented in SSVC version 1. On the other hand, the participant demographics may inflate the inter-rater agreement based on shared tacit understanding through the process of authorship. The one participant who was not an author surfaced two places where this was the case, but we expect the organizational homogeneity of the participants has inflated the agreement somewhat. The anecdotal feedback from vulnerability managers at several organizations (including VMware [@akbar2020ssvc] and McAfee) is about refinement and tweaks, not gross disagreement. Therefore, while further refinement is necessary, this evidence suggests the results have some transferability to other organizations and are not a total artifact of the participant organization demographics.
+The following table is encouraging, though not conclusive. *k*\<0 is a strong sign of discordance. Although it is unclear how close to 1 is success, *k*\<0 would be a clear sign of failure. In some ways, these results may be undercounting the agreement for SSVC as presented. These results are for SSVC prior to the improvements documented in [Improvements Instigated by the Pilot](#improvements-instigated-by-the-pilot), which are implemented in SSVC version 1. On the other hand, the participant demographics may inflate the inter-rater agreement based on shared tacit understanding through the process of authorship. The one participant who was not an author surfaced two places where this was the case, but we expect the organizational homogeneity of the participants has inflated the agreement somewhat. The anecdotal feedback from vulnerability managers at several organizations (including VMware [@akbar2020ssvc] and McAfee) is about refinement and tweaks, not gross disagreement. Therefore, while further refinement is necessary, this evidence suggests the results have some transferability to other organizations and are not a total artifact of the participant organization demographics.
Table: Inter-Rater Agreement for Decision Points -| | Safety Impact | Exploitation | Technical Impact | Portfolio Value | Mission Impact| Exposure | Dev Result | Deployer Result | -| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | -Fleiss’ *k* | 0.122 | 0.807 | 0.679 | 0.257 | 0.146 | 0.480 | 0.226 | 0.295 | -|Disagreement range | 2,4 | 1,2 | 1,1 | 1,1 | 2,4 | 1,2 | 1,3 | 2,3 | +| | Safety Impact | Exploitation | Technical Impact | Portfolio Value | Mission Impact | Exposure | Dev Result | Deployer Result | +|:-------------------|--------------:|-------------:|------------------:|----------------:|---------------:|---------:|-----------:|----------------:| +| Fleiss’ *k* | 0.122 | 0.807 | 0.679 | 0.257 | 0.146 | 0.480 | 0.226 | 0.295 | +| Disagreement range | 2,4 | 1,2 | 1,1 | 1,1 | 2,4 | 1,2 | 1,3 | 2,3 | For all decision points, the presumed goal is for *k* to be close or equal to 1. The statistics literature has identified some limited cases in which Fleiss’ k behaves strangely—for example it is lower than expected when raters are split between 2 of q ratings when q\>2 [@falotico2015fleiss]. This paradox may apply to the safety and mission impact values, in particular. The paradox would bite hardest if the rating for each vulnerability was clustered on the same two values, for example, minor and major. Falotico and Quatto’s proposed solution is to permute the columns, which is safe with unordered categorical data. Since the nine vulnerabilities do not have the same answers as each other (that is, the answers are not clustered on the same two values), we happen to avoid the worst of this paradox, but the results for safety impact and mission impact should be interpreted with some care. -This solution identifies another difficulty of Fleiss’ kappa, namely that it does not preserve any order; none and catastrophic are considered the same level of disagreement as none and minor. Table 13 displays a sense of the range of disagreement to complement this weakness. This value is the largest distance between rater selections on a single vulnerability out of the maximum possible distance. So, for safety impact, the most two raters disagreed was by two steps (none to major, minor to hazardous, or major to catastrophic) out of the four possible steps (none to catastrophic). The only values of *k* that are reliably comparable are those with the same number of options (that is, the same maximum distance). In other cases, closer to 1 is better, but how close is close enough to be considered “good” changes. In all but one case, if raters differed by two steps then there were raters who selected the central option between them. The exception was mission impact for CVE-201814781; it is unclear whether this discrepancy should be localized to a poor test scenario description, or to SSVC’s mission impact definition. Given it is an isolated occurrence, we expect the scenario description at least partly. +This solution identifies another difficulty of Fleiss’ kappa, namely that it does not preserve any order; none and catastrophic are considered the same level of disagreement as none and minor. The table above displays a sense of the range of disagreement to complement this weakness. This value is the largest distance between rater selections on a single vulnerability out of the maximum possible distance. So, for safety impact, the most two raters disagreed was by two steps (none to major, minor to hazardous, or major to catastrophic) out of the four possible steps (none to catastrophic). 
The only values of *k* that are reliably comparable are those with the same number of options (that is, the same maximum distance). In other cases, closer to 1 is better, but how close is close enough to be considered “good” varies. In all but one case, if raters differed by two steps then there were raters who selected the central option between them. The exception was mission impact for CVE-2018-14781; it is unclear whether this discrepancy should be localized to a poor test scenario description, or to SSVC’s mission impact definition. Given it is an isolated occurrence, we expect the scenario description is at least partly responsible.

Nonetheless, *k* provides some way to measure improvement on this conceptual engineering task. The pilot evaluation can be repeated, with more diverse groups of stakeholders after the descriptions have been refined by stakeholder input, to measure fit to this goal. For a standard to be reliably applied across different analyst backgrounds, skill sets, and cultures, a set of decision point descriptions should ideally achieve *k* of 1 for each item in multiple studies with diverse participants. Such a high level of agreement would be difficult to achieve, but it would ensure that when two analysts assign a priority with the system, they get the same answer. Such agreement is not the norm with CVSS currently [@allodi2018effect].

Table: SSVC pilot scores compared with the CVSS base scores for the vulnerabilities provided by NVD.

-| CVE-ID | Representative SSVC decision values | SSVC recommendation (supplier, deployer) | NVD’s CVSS base score |
-| :---------------- | :--------------------------------: | :--------------------- | :---------------------- |
-| CVE-2014-0751 | E:N/T:T/U:L/S:H/X:C/M:C | (Sched, OOC) | 7.5 (High) (v2) |
-| CVE-2015-1014 | E:N/T:T/U:L/S:J/X:S/M:F | (Sched, Sched) | 7.3 (High) (v3.0) |
-| CVE-2015-5374 | E:A/T:P/U:L/S:H/X:C/M:F | (Immed, Immed) | 7.8 (High) (v2) |
-| CVE-2017-3183 | E:N/T:T/U:E/S:M/X:C/M:C | (Sched, Sched) | 8.8 (High) (v3.0) |
-| CVE-2017-5638 | E:A/T:T/U:S/S:M/X:U/M:C | (Immed, OOC) | 10.0 (Critical) (v3.0) |
-| CVE-2017-9590 | E:P/T:T/U:E/S:M/X:U/M:D | (OOC, Sched) | 5.9 (Medium) (v3.0) |
-| CVE-2018-14781 | E:P/T:P/U:L/S:H/X:C/M:F | (OOC, OOC) | 5.3 (Medium) (v3.0) |
-| CVE-2019-2691 | E:N/T:P/U:E/S:M/X:C/M:C | (Sched, Sched) | 4.9 (Medium) (v3.0) |
-| CVE-2019-9042 | E:A/T:T/U:L/S:N/X:C/M:C | (OOC, Sched) | 7.2 (High) (v3.0) |
+| CVE-ID                                                            | Representative SSVC decision values | SSVC recommendation (supplier, deployer) | NVD’s CVSS base score  |
+|:------------------------------------------------------------------|:------------------------------------:|:-------------------------------------------|:-----------------------|
+| [CVE-2014-0751](https://nvd.nist.gov/vuln/detail/CVE-2014-0751)   | E:N/T:T/U:L/S:H/X:C/M:C              | (Sched, OOC)                               | 7.5 (High) (v2)        |
+| [CVE-2015-1014](https://nvd.nist.gov/vuln/detail/CVE-2015-1014)   | E:N/T:T/U:L/S:J/X:S/M:F              | (Sched, Sched)                             | 7.3 (High) (v3.0)      |
+| [CVE-2015-5374](https://nvd.nist.gov/vuln/detail/CVE-2015-5374)   | E:A/T:P/U:L/S:H/X:C/M:F              | (Immed, Immed)                             | 7.8 (High) (v2)        |
+| [CVE-2017-3183](https://nvd.nist.gov/vuln/detail/CVE-2017-3183)   | E:N/T:T/U:E/S:M/X:C/M:C              | (Sched, Sched)                             | 8.8 (High) (v3.0)      |
+| [CVE-2017-5638](https://nvd.nist.gov/vuln/detail/CVE-2017-5638)   | E:A/T:T/U:S/S:M/X:U/M:C              | (Immed, OOC)                               | 10.0 (Critical) (v3.0) |
+| [CVE-2017-9590](https://nvd.nist.gov/vuln/detail/CVE-2017-9590)   | E:P/T:T/U:E/S:M/X:U/M:D              | (OOC, Sched)                               | 5.9 (Medium) (v3.0)    |
+| 
[CVE-2018-14781](https://nvd.nist.gov/vuln/detail/CVE-2018-14781) | E:P/T:P/U:L/S:H/X:C/M:F | (OOC, OOC) | 5.3 (Medium) (v3.0) | +| [CVE-2019-2691](https://nvd.nist.gov/vuln/detail/CVE-2019-2691) | E:N/T:P/U:E/S:M/X:C/M:C | (Sched, Sched) | 4.9 (Medium) (v3.0) | +| [CVE-2019-9042](https://nvd.nist.gov/vuln/detail/CVE-2019-9042) | E:A/T:T/U:L/S:N/X:C/M:C | (OOC, Sched) | 7.2 (High) (v3.0) | -Table 14 presents the mode decision point value for each vulnerability tested, as well as the recommendation that would result from that set based on the recommended decision trees in SSVC version 1. The comparison with the NVD’s CVSS base scores mostly confirms that SSVC is prioritizing based on different criteria, as designed. In particular, differences in the state of exploitation and safety impact are suggestive. +The table above presents the mode decision point value for each vulnerability tested, as well as the recommendation that would result from that set based on the recommended decision trees in SSVC version 1. The comparison with the NVD’s CVSS base scores mostly confirms that SSVC is prioritizing based on different criteria, as designed. In particular, differences in the state of exploitation and safety impact are suggestive. Based on these results, we made about ten changes, some bigger than others. We did not execute a new rater agreement experiment with the updated descriptions. The pilot results are encouraging, and we believe it is time to open up a wider community discussion. @@ -172,4 +183,4 @@ Some of these points left marks on other decision points. The decision point “ Three of the above decision points left no trace on the current system. “State of legal or regulatory obligations,” “cost of developing remediation,” and “patch distribution readiness” were dropped as either being too vaguely defined, too high level, or otherwise not within the scope of decisions by the defined stakeholders. The remaining decision point, “adversary’s strategic benefit of exploiting the vulnerability,” transmuted to a different sense of system value. In an attempt to be more concrete and not speculate about adversary motives, we considered a different sense of value: supplier portfolio value. -The only decision point that we have removed following the pilot is developer portfolio value. This notion of value was essentially an over-correction to the flaws identified in the “adversary’s strategic benefit of exploiting the vulnerability” decision point. “Supplier portfolio value” was defined as “the value of the affected component as a part of the developer’s product portfolio. Value is some combination of importance of a given piece of software, number of deployed instances of the software, and how many people rely on each. The developer may also include lifecycle stage (early development, stable release, decommissioning, etc.) as an aspect of value.” It had two possible values: low and high. As Table 13 demonstrates, there was relatively little agreement among the six analysts about how to evaluate this decision point. We replaced this sense of portfolio value with *Utility*, which combines *Value Density* and *Automatability*. +The only decision point that we have removed following the pilot is developer portfolio value. This notion of value was essentially an over-correction to the flaws identified in the “adversary’s strategic benefit of exploiting the vulnerability” decision point. 
“Supplier portfolio value” was defined as “the value of the affected component as a part of the developer’s product portfolio. Value is some combination of importance of a given piece of software, number of deployed instances of the software, and how many people rely on each. The developer may also include lifecycle stage (early development, stable release, decommissioning, etc.) as an aspect of value.” It had two possible values: low and high. As the inter-rater reliability table demonstrates, there was relatively little agreement among the six analysts about how to evaluate this decision point. We replaced this sense of portfolio value with *Utility*, which combines *Value Density* and *Automatability*. diff --git a/docs/topics/future_work.md b/docs/topics/future_work.md index eae393dc..61588546 100644 --- a/docs/topics/future_work.md +++ b/docs/topics/future_work.md @@ -9,7 +9,12 @@ Plans for future work focus on further requirements gathering, analysis of types The community should know what users of a vulnerability prioritization system want. To explore their needs, it is important to understand how people actually use CVSS and what they think it tells them. -In general, such empirical, grounded evidence about what practitioners and decision makers want from vulnerability scoring is lacking. We have based this paper’s methodology on multiple decades of professional experience and myriad informal conversations with practitioners. Such evidence is not a bad place to start, but it does not lend itself to examination and validation by others. The purpose of understanding practitioner expectations is to inform what a vulnerability-prioritization methodology should actually provide by matching it to what people want or expect. The method this future work should take is long-form, structured interviews. We do not expect anyone to have access to enough consumers of CVSS to get statistically valid results out of a short survey, nor to pilot a long survey. +In general, such empirical, grounded evidence about what practitioners and decision makers want from vulnerability scoring is lacking. +We have based SSVC’s methodology on multiple decades of professional experience and myriad informal conversations with practitioners. +Such evidence is not a bad place to start, but it does not lend itself to examination and validation by others. +The purpose of understanding practitioner expectations is to inform what a vulnerability-prioritization methodology should actually provide by matching it to what people need or expect. +The method this future work should take is long-form, structured interviews. +We do not expect anyone to have access to enough consumers of CVSS to get statistically valid results out of a short survey, nor to pilot a long survey. Coordinators in particular are likely to be heterogeneous. While the FIRST service frameworks for PSIRTs and CSIRTs differentiate two broad classes of coordinators, we have focused on CSIRTs here. diff --git a/docs/topics/information_sources.md b/docs/topics/information_sources.md index 11719941..35958607 100644 --- a/docs/topics/information_sources.md +++ b/docs/topics/information_sources.md @@ -37,6 +37,8 @@ Three prominent examples are CVSS impact base metrics, CWE, and CPE. ### CVSS and Technical Impact +{% include-markdown "../_includes/_cvss4_question.md" heading-offset=1 %} + [*Technical Impact*](../reference/decision_points/technical_impact.md) is directly related to the CVSS impact metric group. 
However, this metric group cannot be directly mapped to [*Technical Impact*](../reference/decision_points/technical_impact.md) in CVSS version 3 because of the Scope metric.
[*Technical Impact*](../reference/decision_points/technical_impact.md) is only about adversary control of the vulnerable component.
@@ -44,7 +46,7 @@ If the CVSS version 3 value of “Scope” is “Changed,” then the impact met

If confidentiality, integrity, and availability metrics are all “high” then [*Technical Impact*](../reference/decision_points/technical_impact.md) is [*total*](../reference/decision_points/technical_impact.md), as long as the impact metrics in CVSS are clearly about just the vulnerable component.
However, the other values of the CVSS version 3 impact metrics cannot be mapped directly to [*partial*](../reference/decision_points/technical_impact.md) because of CVSS version 3.1 scoring guidance.
Namely, “only the increase in access, privileges gained, or other negative outcome as a result of successful exploitation should be considered” [@cvss_v3-1].
-The example given is that if an attacker already has read access, but gains all other access through the exploit, then read access didn't change and the confidentiality metric score should be “None” .
+The example given is that if an attacker already has read access, but gains all other access through the exploit, then read access didn't change and the confidentiality metric score should be “None”.
However, in this case, SSVC would expect the decision point to be evaluated as [*total*](../reference/decision_points/technical_impact.md) because as a result of the exploit the attacker gains total control of the device, even though they started with partial control.

### CWE and Exploitation

@@ -74,7 +76,7 @@ Some sources, such as CWE or existing asset management solutions, would require

### Automatable and Value Density

The SSVC decision point that we have not identified an information source for is [Utility](../reference/decision_points/utility.md).
-[Utility](../reference/decision_points/utility.md) is composed of [*Automatable*](../reference/decision_points/automatable.md) and [*Value Density*](../reference/decision_points/value_density.md), so the question is what a sort of feed could support each of those decision points.
+[Utility](../reference/decision_points/utility.md) is composed of [*Automatable*](../reference/decision_points/automatable.md) and [*Value Density*](../reference/decision_points/value_density.md), so the question is what sort of feed could support each of those decision points.
A feed is plausible for both of these decision points.
The values for [*Automatable*](../reference/decision_points/automatable.md) and [*Value Density*](../reference/decision_points/value_density.md) are both about the relationship between a vulnerability, the attacker community, and the aggregate state of systems connected to the Internet.
@@ -82,6 +84,12 @@ While that is a broad analysis frame, it means that any community that shares a

An organization in the People's Republic of China may have a different view than an organization in the United States, but most organizations within each region should have views close enough to share values for [*Automatable*](../reference/decision_points/automatable.md) and [*Value Density*](../reference/decision_points/value_density.md).
These factors suggest that a market for an information feed about these decision points is viable.
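To make the idea concrete, a single record in such a feed might look like the sketch below. The schema and field names are hypothetical; no such public feed currently exists, and the values shown are invented rather than assessments.

```python
# Hypothetical record in a shared Automatable / Value Density feed.
# The schema and the values shown are illustrative only.
feed_record = {
    "vulnerability": "CVE-2019-9042",  # example ID; the values below are invented
    "automatable": "yes",
    "value_density": "diffuse",
    "rationale": "public script automates the first four kill chain steps",
    "as_of": "2021-06-01",
}
```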
+!!! note inline end "CVSS v4, Automatable, and Value Density"
+
+    It is not coincidental that the CVSS v4 supplemental metrics include [Automatable](https://www.first.org/cvss/v4.0/specification-document#Automatable-AU)
+    (AU) and [Value Density](https://www.first.org/cvss/v4.0/specification-document#Value-Density-V) (V).
+    The SSVC team collaborated in the development of these metrics with the [FIRST CVSS Special Interest Group](https://www.first.org/cvss).
+
At this point, it is not clear that an algorithm or search process could be designed to automate scoring [*Automatable*](../reference/decision_points/automatable.md) and [*Value Density*](../reference/decision_points/value_density.md).
It would be a complex natural language processing task.
Perhaps a machine learning system could be designed to suggest values.

diff --git a/docs/topics/items_with_same_priority.md b/docs/topics/items_with_same_priority.md
index 62fbee1b..87641842 100644
--- a/docs/topics/items_with_same_priority.md
+++ b/docs/topics/items_with_same_priority.md
@@ -8,9 +8,11 @@ The priority is equivalent.

!!! tip "This is not CVSS"

    This approach may feel uncomfortable since CVSS gives the appearance of a finer grained priority.
-    CVSS appears to say,
+    CVSS appears to say,
+
    > Not just 4.0 to 6.9 is ‘medium’ severity, but 4.6 is more severe than 4.5.
-    However, CVSS is designed to be accurate only within +/- 0.5,
+
+    However, CVSS v3.1 is designed to be accurate only within +/- 0.5,
    and, in practice, is scored with errors of around +/- 1.5 to 2.5 [@allodi2018effect, see Figure 1].
    An error of this magnitude is enough to make all of the “normal” range from 4.0 to 6.9 equivalent, because

diff --git a/docs/topics/related_systems.md b/docs/topics/related_systems.md
index c16deaf8..7bcef98f 100644
--- a/docs/topics/related_systems.md
+++ b/docs/topics/related_systems.md
@@ -9,6 +9,8 @@ This section discusses the relationship between these various systems and SSVC.

## CVSS

+{% include-markdown "../_includes/_cvss4_question.md" heading-offset=1 %}
+
CVSS version 3.1 has three metric groups: base, environmental, and temporal.
The metrics in the base group are all required, and are the only required metrics.
In connection with this design, CVSS base scores and base metrics are far and away the most commonly used and communicated.
@@ -26,7 +28,7 @@ In these three examples, the modifications tend to add complexity to CVSS by add

Product vendors have varying degrees of adaptation of CVSS for development prioritization, including but not limited to [Red Hat](https://access.redhat.com/security/updates/classification), [Microsoft](https://www.microsoft.com/en-us/msrc/security-update-severity-rating-system), and [Cisco](https://tools.cisco.com/security/center/resources/security_vulnerability_policy.html).
The vendors codify CVSS’s recommended qualitative severity rankings in different ways, and Red Hat and Microsoft make the user interaction base metric more important.

-> Exploitability metrics (Base metric group)
+### Exploitability metrics (Base metric group)

The four metrics in this group are Attack Vector, Attack Complexity, Privileges Required, and User Interaction.
These considerations are likely to be involved in the [Automatability](../reference/decision_points/automatable.md) decision point.
@@ -46,7 +48,7 @@ Most notably the concept of vulnerability chaining is addressed in [Automatabili

A vulnerability is evaluated based on an observable outcome of whether the first four steps of the kill chain can be automated for it.
A proof of automation in a relevant environment is an objective evaluation of the score in a way that cannot be provided for some CVSS elements, such as Attack Complexity.
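A minimal sketch of that observable test, assuming the evaluator can answer one yes/no question per kill chain step (the function and field names here are ours, for illustration):

```python
# Sketch: Automatable is "yes" only when the first four kill chain steps
# (reconnaissance, weaponization, delivery, exploitation) can all be
# automated for the vulnerability in question. Illustrative only.
KILL_CHAIN_STEPS = ("reconnaissance", "weaponization", "delivery", "exploitation")

def automatable(step_automated: dict) -> str:
    """Return the Automatable value given per-step automation evidence."""
    return "yes" if all(step_automated[step] for step in KILL_CHAIN_STEPS) else "no"

# Example: delivery requires a human in the loop, so Automatable is "no".
evidence = {"reconnaissance": True, "weaponization": True,
            "delivery": False, "exploitation": True}
assert automatable(evidence) == "no"
```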
-> Impact metrics (Base metric group)
+### Impact metrics (Base metric group)

The metrics in this group are Confidentiality, Integrity, and Availability.
There is also a loosely associated Scope metric.
@@ -60,13 +62,13 @@ The impact of exploitation of the vulnerable component on other components is co

CVSS addresses some definitions of the scope of CVSS as a whole under its Scope metric definition.
In SSVC, these definitions are in the [Scope](scope.md) section.

-> Temporal metric groups
+### Temporal metric group

The temporal metric group primarily contains the Exploit Code Maturity metric.
This metric expresses a concept similar to [*Exploitation*](../reference/decision_points/exploitation.md).
The main difference is that [*Exploitation*](../reference/decision_points/exploitation.md) is not optional in SSVC and that SSVC accounts for the observation that most vulnerabilities with CVE-IDs do not have public exploit code [@householder2020historical] and are not actively exploited [@guido2011exploit,@jacobs2021epss].

-> Environmental metric group
+### Environmental metric group

The environmental metric group allows a consumer of a CVSS base score to change it based on their environment.
CVSS needs this functionality because the organizations that produce CVSS scores tend to be what SSVC calls **suppliers** and consumers of CVSS scores are what SSVC calls **deployers**.

diff --git a/docs/topics/worked_example.md b/docs/topics/worked_example.md
index 4ecb95d7..857a1892 100644
--- a/docs/topics/worked_example.md
+++ b/docs/topics/worked_example.md
@@ -1,7 +1,7 @@
# Worked Example

-As an example, we will evaluate CVE-2018-14781 step by step from the deployer point of view.
+As an example, we will evaluate [CVE-2018-14781](https://nvd.nist.gov/vuln/detail/CVE-2018-14781) step by step from the deployer point of view.
The scenario here is the one used for the pilot study.
This example uses the SSVC version 1 deployer decision tree.
@@ -43,9 +43,11 @@ use its installation to remotely identify targets.

However, most of the hospital’s clients have not installed the app, and in nearly all cases physical proximity to the device is necessary; therefore, we select [*small*](../reference/decision_points/system_exposure.md) and move on to ask about mission impact.
-According to the fictional pilot scenario, “Our mission dictates that the first and foremost priority is to contribute
-to human welfare and to uphold the Hippocratic oath (do no harm).” The continuity of operations planning for a hospital
-is complex, with many MEFs.
+According to the fictional pilot scenario,
+
+> Our mission dictates that the first and foremost priority is to contribute to human welfare and to uphold the Hippocratic oath (do no harm).
+
+The continuity of operations planning for a hospital is complex, with many Mission Essential Functions (MEFs).
However, even from this abstract, it seems clear that “do no harm” is at risk due to this vulnerability.
A mission essential function supporting that mission is that each of the various medical devices works as expected or, at least, that a failed device cannot actively be used to inflict harm.
@@ -58,7 +60,7 @@ Therefore, we select [*MEF failure*](../reference/decision_points/mission_impact

This particular pilot study used SSVC version 1.
In the suggested deployer tree for SSVC version 2.1, mission and safety impact would be used to calculate the overall [*Human Impact*](../reference/decision_points/human_impact.md), and [*Automatable*](../reference/decision_points/automatable.md) would need to be answered as well.
-Conducting further studies with the recommended version 2.1 Deployer tree remains an area of future work.
+Conducting further studies with more recent versions of the Deployer decision model remains an area of future work.
In the pilot study, this information is conveyed as follows:

!!! info "Use of the cyber-physical system"

diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index de1f96eb..ba3c6f09 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -1,6 +1,23 @@
# Learning SSVC

-{== todo add intro ==}
+SSVC stands for Stakeholder-Specific Vulnerability Categorization.
+It is a methodology for prioritizing vulnerabilities based on the needs of the stakeholders involved in the vulnerability management process.
+SSVC is designed to be used by any stakeholder in the vulnerability management process, including patch suppliers, patch deployers, coordinators, and others.
+One of SSVC's key features is that it is intended to be customized to the needs of the organization using it.
+In the [HowTo](../howto/index.md) section, we provide a set of decision models that can be used as a starting point,
+but we expect that organizations will need to modify these models to fit their specific needs.
+An introduction to how we think about SSVC can be found in the [Understanding SSVC](../topics/index.md) section.
+For technical reference, including a list of decision points, see [Reference](../reference/index.md).
+
+!!! info "SSVC in a Nutshell"
+
+    SSVC is built around the concept of a **Decision Model** that takes a set of input **Decision Points** and
+    applies a **Policy** to produce a set of output **Outcomes**.
+    The **Decision Points** are the factors that influence the decision, and the **Outcomes** are the possible results of the decision.
+    Both **Decision Points** and **Outcomes** are defined as ordered sets of enumerated values.
+    The **Policy** is a mapping from each combination of decision point values to an outcome value.
+    One of SSVC's goals is to provide a methodology to develop risk-informed guidance at a human scale, while enabling
+    data-driven decision-making.

!!! tip "SSVC Calculator"

@@ -8,8 +25,32 @@
    The decisions modeled in the calculator are based on the [Supplier](../howto/supplier_tree.md), [Deployer](../howto/deployer_tree.md), and [Coordinator](../howto/coordination_intro.md) decision models.

+SSVC can be used in conjunction with other tools and methodologies to help prioritize vulnerability response.
+
+!!! example "CVSS and SSVC"
+
+    The Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of
+    software security vulnerabilities.
+    CVSS assigns technical severity scores to vulnerabilities, and many organizations use this score to inform their
+    vulnerability management process.
+    In SSVC, we took a different approach with our stakeholder-specific model, although the information contained in a
+    CVSS vector can be applied to SSVC decision models.
+    For example, the [Technical Impact](../reference/decision_points/technical_impact.md) decision point in
+    the [Supplier](../howto/supplier_tree.md) decision model can be informed by the CVSS vector.
+
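As a sketch of that idea, a first-pass heuristic could follow the mapping discussed in [Information Sources](../topics/information_sources.md). The function below is illustrative only, not an authoritative mapping; as noted there, CVSS v3.1 scoring guidance means that metric combinations other than all-high cannot be translated directly.

```python
# Sketch: first-pass heuristic from CVSS v3.1 impact metrics to the SSVC
# Technical Impact decision point. Only the all-high case maps cleanly, and
# only when the impact metrics clearly describe just the vulnerable
# component; everything else needs analyst review. Illustrative only.
def technical_impact_hint(c: str, i: str, a: str, scope: str) -> str:
    if scope != "U":
        # Changed scope: the impact metrics may describe other components.
        return "analyst review required"
    if (c, i, a) == ("H", "H", "H"):
        return "total"
    # CVSS v3.1 guidance counts only the *increase* in access, so "partial"
    # cannot be inferred directly from the remaining metric combinations.
    return "analyst review required"

assert technical_impact_hint("H", "H", "H", "U") == "total"
```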
example "EPSS and SSVC" + + The Exploit Prediction Scoring System (EPSS) provides information regarding the likelihood of a vulnerability being exploited in the wild. + This information can be used to inform the [Exploitation](../reference/decision_points/exploitation.md) decision point in the + [Supplier](../howto/supplier_tree.md), [Deployer](../howto/deployer_tree.md), and [Coordinator Publication](../howto/publication_decision.md) decision models. + + + + ## Videos +Provided below are videos that provide an overview of SSVC and the implementation of decision models. + | Source | Video | | ------ |----------------------------------------------------------------------------------------------------------------------------------| | SEI Podcast Series | [A Stakeholder-Specific Approach to Vulnerability Management](https://youtu.be/wbUTizBaXA0) | @@ -23,6 +64,8 @@ ## Other Content +We've collected a list of articles and blog posts that provide additional information about SSVC. + | Source | Link | |- -------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | SEI | [Prioritizing Vulnerability Response with a Stakeholder-Specific Vulnerability Categorization](https://insights.sei.cmu.edu/blog/prioritizing-vulnerability-response-with-a-stakeholder-specific-vulnerability-categorization/) |