Commit 0ab672e: Add dimensions.md (1 parent 64be7f5, 1 file changed, 272 additions)

# Dimensions

This section describes the various dimensions
and their corresponding sub-dimensions.

The descriptions are largely based on (and mostly copied from)
the [OWASP Project Integration Project Writeup](https://github.com/OWASP/www-project-integration-standards/blob/master/writeups/owasp_in_sdlc/index.md).

## Implementation

This dimension covers the "traditional"
hardening of software and infrastructure components.

There is an abundance of libraries and frameworks implementing
secure defaults.
For frontend development, [ReactJS](https://reactjs.org/) seems to be
the latest favourite in the JavaScript world.

On the database side, there are [ORM](https://sequelize.org/) libraries
and [Query Builders](https://github.com/kayak/pypika) for most languages.
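
The main secure default such libraries provide is parameterized queries instead of string concatenation. A minimal sketch using Python's built-in `sqlite3` module (not one of the libraries linked above) illustrates the difference:

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value, so the payload is treated as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # the injected OR clause matches every row
print(safe)    # no row matches the literal payload string
```

ORMs and query builders apply this binding for you by default, which is what makes them a secure default rather than an optional hardening step.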

If you write in Java,
the [ESAPI project](https://www.javadoc.io/doc/org.owasp.esapi/esapi/latest/index.html)
offers several methods to securely implement features,
ranging from cryptography to input escaping and output encoding.

**Example Low Maturity Scenario:**

The API was queryable by anyone, and GraphQL introspection was enabled, since
all components had been left in their debug configuration.

Sensitive API paths were not whitelisted.
The team only found out that the application was being attacked when the server
showed very high CPU load.
The response was to bring the system down; very little information about
the attack was recovered, apart from the fact that someone
had been mining cryptocurrencies on the server.

**Example Low Maturity Scenario:**

The team attempted to build the requested features using vanilla NodeJS.
Connectivity to backend systems is validated by firing an internal request
to `/healthcheck?remoteHost=<xx.xx.xx>`, which attempts to run a ping
command against the IP specified.
All secrets are hard-coded.
The team uses off-the-shelf GraphQL libraries, but their versions
are not checked using [NPM Audit](https://docs.npmjs.com/cli/audit).
Development is performed by pushing to master, which triggers a webhook that
uses FTP to copy the latest master to the development server, which will become production once development is finished.
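
The `remoteHost` parameter above is a textbook command-injection vector if it is spliced into a shell command. A hedged sketch of how such input could be validated instead, using Python's standard library (not the scenario's actual NodeJS code):

```python
import ipaddress
import subprocess

def ping(remote_host: str) -> bool:
    """Ping a host, but only after strict validation of the input."""
    try:
        # Reject anything that is not a literal IP address, so shell
        # metacharacters like "; rm -rf /" never reach a command line.
        addr = ipaddress.ip_address(remote_host)
    except ValueError:
        raise ValueError(f"not an IP address: {remote_host!r}")
    # Pass arguments as a list (no shell) as a second layer of defence.
    result = subprocess.run(
        ["ping", "-c", "1", str(addr)],
        capture_output=True, timeout=10,
    )
    return result.returncode == 0
```

The two key points are the allow-list style validation and the absence of `shell=True`: even if validation were bypassed, the argument list never passes through a shell.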

**Example High Maturity Scenario:**

Team members have access to comprehensive documentation
and a library of code snippets they can use to accelerate development.

Linters are bundled with pre-commit hooks,
and no code reaches master without peer review.

Pre-merge tests are executed before merging code into master.
They run a comprehensive suite covering unit tests,
service acceptance tests, and regression tests.

Once a day, a pipeline of specially configured
static code analysis tools runs against
the features merged that day; the results are
triaged by a trained security team and fed back to engineering.

There is a cronjob executing dynamic analysis tools against staging
with a similar process.

Pentests are conducted against newly released features on every release,
and also periodically against the whole software stack.

## Culture and Organization

This section covers topics related to culture and organization, such as
processes, education, and the design phase.

Once requirements are gathered and analysis is performed,
implementation specifics need to be defined.
The outcome of this stage is usually a diagram outlining data flows
and a general system architecture.
This presents an opportunity for both threat modeling
and attaching security considerations
to every ticket and epic that comes out of this stage.

### Design

There is some great advice on threat modeling out there,
*e.g.* [this](https://arstechnica.com/information-technology/2017/07/how-i-learned-to-stop-worrying-mostly-and-love-my-threat-model/)
article or [this](https://www.microsoft.com/en-us/securityengineering/sdl/threatmodeling) one.

A bite-sized primer by Adam Shostack himself can be found
[here](https://adam.shostack.org/blog/2018/03/threat-modeling-panel-at-appsec-cali-2018/).

OWASP includes a short [article](https://wiki.owasp.org/index.php/Category:Threat_Modeling)
on Threat Modeling along with a relevant [Cheatsheet](https://cheatsheetseries.owasp.org/cheatsheets/Threat_Modeling_Cheat_Sheet.html).
Moreover, if you're following OWASP SAMM, it has a short section on [Threat Assessment](https://owaspsamm.org/model/design/threat-assessment/).

There are a few projects that can help with creating threat models
at this stage: [PyTM](https://github.com/izar/pytm) is one,
[ThreatSpec](https://github.com/threatspec/threatspec) is another.
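
Tools like PyTM let you declare the model in code. As a rough illustration of the idea only (plain Python, not PyTM's actual API), a data-flow model can be structured data that checks itself:

```python
# Hypothetical, simplified threat-model-as-code: each data flow is
# annotated with the controls it needs; a check flags the gaps.
flows = [
    {"name": "browser -> frontend", "encrypted": True,  "authenticated": True},
    {"name": "frontend -> cache",   "encrypted": False, "authenticated": True},
    {"name": "cache -> database",   "encrypted": False, "authenticated": False},
]

def find_gaps(flows):
    """Return a list of (flow name, missing control) pairs."""
    gaps = []
    for flow in flows:
        for control in ("encrypted", "authenticated"):
            if not flow[control]:
                gaps.append((flow["name"], control))
    return gaps

for flow, control in find_gaps(flows):
    print(f"{flow}: missing {control}")
```

Keeping the model in the repository means it can be diffed, reviewed, and updated alongside the architecture it describes.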

> Note: _A threat model can be as simple as a data flow diagram with attack vectors on every flow and asset, and equivalent remediations.
An example can be found below._

![Threat Model](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/threat_model.png "Threat Model")

Lastly, if the organisation maps Features to Epics, the Security Knowledge Framework (SKF) can be used to facilitate this process by leveraging its questionnaire function.

![SKF](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/skf_qs.png "SKF")

This practice has the side effect of training non-security specialists to think like attackers.

The outcomes of this stage should help lay the foundation of secure design and the relevant considerations.

**Example Low Maturity Scenario:**

Following vague feature requirements, the design includes caching data in a local unencrypted database with a hardcoded password.

Secrets for accessing the remote data store are hardcoded in the configuration files.
All communication between backend systems is in plaintext.

The frontend serves data over GraphQL as a thin layer between the caching system and the end user.

GraphQL queries are dynamically translated to SQL, Elasticsearch, and NoSQL queries.
Access to data is protected with basic auth set to _1234:1234_ for development purposes.

**Example High Maturity Scenario:**

Based on a detailed threat model defined and updated through code, the team decides the following:

* Local encrypted caches need to expire and be auto-purged.
* Communication channels are encrypted and authenticated.
* All secrets are persisted in a shared secrets store.
* The frontend is designed with permissions-model integration.
* A permissions matrix is defined.
* Input is escaped and output is encoded appropriately, using well-established libraries.

### Education and Guidance

Metrics won't necessarily improve without training engineering teams and somehow building a security-minded culture.
Security training is a long and complicated discussion.
There is a variety of approaches out there; on the testing-only end of the spectrum there are fully black-box virtual machines such as [DVWA](http://www.dvwa.co.uk/), the [Metasploitable series](https://metasploit.help.rapid7.com/docs/metasploitable-2) and the [VulnHub](https://www.vulnhub.com/) project.

The code & remediation end of the spectrum isn't as well developed,
mainly due to the complexity involved in building and distributing such material.
However, there are some respectable solutions; [Remediate The Flag](https://www.remediatetheflag.com/)
can be used to set up a code-based challenge.

![Remediate the Flag](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/rtf.png "Remediate the Flag")

However, if questionnaires are the preferred medium, or if the organisation
is looking for self-service testing, [Secure Coding Dojo](https://github.com/trendmicro/SecureCodingDojo) is an interesting solution.

More on the self-service side, the Security Knowledge Framework has released
several [Labs](https://owasp-skf.gitbook.io/asvs-write-ups/) that each
showcase one vulnerability and provide information on how to exploit it.

However, to our knowledge, the most flexible project out there is probably
the [Juice Shop](https://github.com/bkimminich/juice-shop): deployable
on Heroku with one click, it offers both CTF functionality and a self-service
standalone application that comes with solution detection
and a comprehensive progress board.

![Juice Shop](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/juiceshop.png "Juice Shop")

### Process

**Example High Maturity Scenario:**

Business continuity and security teams run incident management drills
periodically to refresh incident playbook knowledge.

## Test and Verification

At any maturity level, linters can be introduced to ensure that consistent
code is being added.
For most linters, there are IDE integrations providing software engineers
with the ability to validate code correctness during development time.
Several linters also include security-specific rules.
This allows for basic security checks before the code is even committed.
For example, if you write in TypeScript, you can use
[tslint](https://github.com/palantir/tslint) along
with [tslint-config-security](https://www.npmjs.com/package/tslint-config-security)
to easily and quickly perform basic checks.
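
Wiring the two together is a matter of extending the ruleset in `tslint.json`; a minimal configuration might look like the sketch below (verify the package name against the version you install):

```json
{
  "extends": ["tslint:recommended", "tslint-config-security"]
}
```

With this in place, `tslint` runs the security rules alongside its standard recommendations on every invocation, including from IDE integrations.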

However, linters cannot detect vulnerabilities in third-party libraries,
and as software supply chain attacks spread, this consideration becomes more important.
To track third-party library usage and audit its security, you can use [Dependency Check/Track](https://dependencytrack.org/).

![SKF Code](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/skf_code.png "SKF Code")

This stage can be used to validate software correctness, and its results serve as a
metric for the security-related decisions of the previous stages.
At this stage both automated and manual testing can be performed.
SAMM again offers 3 maturity levels across Architecture Reviews, Requirements Testing, and Security Testing.
Instructions can be found [here](https://owaspsamm.org/model/verification/) and a screenshot is shown below.

![SAMM Testing](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/samm_testing.png "SAMM Testing")

Testing can be performed in several ways, and the right approach depends heavily on the nature
of the software, the organisation's cadence, and the regulatory requirements, among other things.

If available, automation is a good idea, as it allows detection of easy-to-find vulnerabilities without much human interaction.

If the application communicates using a web-based protocol, the [ZAP](https://github.com/zaproxy/zaproxy) project can be used to automate a great number of web-related attacks and detections.
ZAP can be orchestrated using its REST API, and it can even automate multi-stage attacks by leveraging its Zest scripting support.
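
As a rough sketch of what that orchestration looks like, ZAP's JSON API is driven with plain HTTP requests. The snippet below only builds the request URLs; the endpoint paths follow ZAP's `JSON/<component>/action/<action>/` convention, but check them against your ZAP version's API documentation before relying on this:

```python
from urllib.parse import urlencode

ZAP = "http://localhost:8080"   # default ZAP address (assumption)
API_KEY = "changeme"            # configured in ZAP's API options

def zap_action(component: str, action: str, **params) -> str:
    """Build a ZAP JSON API request URL; sending it is left to
    whatever HTTP client you prefer."""
    query = urlencode({"apikey": API_KEY, **params})
    return f"{ZAP}/JSON/{component}/action/{action}/?{query}"

# Kick off a spider scan, then an active scan, against a target.
spider = zap_action("spider", "scan", url="https://staging.example.com")
ascan = zap_action("ascan", "scan", url="https://staging.example.com")
print(spider)
print(ascan)
```

In practice the official ZAP API clients wrap these calls, but the URL structure is what a CI job or cron trigger ultimately exercises.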

Vulnerabilities from ZAP and a wide variety of other tools can be imported and managed using a dedicated defect management platform such as [Defect Dojo](https://github.com/DefectDojo/django-DefectDojo) (screenshot below).

![Defect Dojo](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/defectdojo.png "Defect Dojo")

For manual testing, the [Web](https://github.com/OWASP/wstg) and [Mobile](https://github.com/OWASP/owasp-mstg) Security Testing Guides can be used to achieve a base level of quality for human-driven testing.

**Example Low Maturity Scenario:**

The business deployed the system to production without testing.
Soon after, the client's routine pentests uncovered deep flaws granting access to backend data and services.
The remediation effort was significant.

**Example High Maturity Scenario:**

Application features received dynamic automated testing as each reached staging, and a trained QA team validated the business requirements that involved security checks.
A security team performed an adequate pentest and gave a sign-off.

## Build and Deployment

Secure configuration standards can be enforced during deployment using the [Open Policy Agent](https://www.openpolicyagent.org/).
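
For instance, a short OPA policy can reject workloads that do not run as non-root. A sketch in Rego (the admission-review input shape assumes a Kubernetes integration; adapt it to wherever you enforce policy):

```rego
package kubernetes.admission

# Deny any Pod whose containers do not explicitly run as non-root.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("container %q must set runAsNonRoot", [container.name])
}
```

Because the policy is data-driven, the same agent can enforce it at admission time in the cluster and as a check in the CI pipeline.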

![SAMM Release](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/samm_release.png "SAMM Release")

**Example Low Maturity Scenario:**

_please create a PR_

**Example High Maturity Scenario:**

The CI/CD system, when migrating successful QA environments to production, applies the appropriate configuration to all components.
Configuration is tested periodically for drift.

Secrets live in memory only and are persisted in a dedicated secrets storage solution such as HashiCorp Vault.
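
One common pattern behind that last point is that the application receives secrets at start-up (for example, injected from the secrets store into its environment) and never writes them to disk. A minimal, store-agnostic sketch in Python (the variable name is made up for illustration):

```python
import os

def load_db_password() -> str:
    """Read a secret injected by the secrets store at deploy time.

    The secret only ever lives in the process environment and memory;
    it is never hardcoded or written to a config file on disk.
    """
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        # Fail fast instead of falling back to a hardcoded default.
        raise RuntimeError("DB_PASSWORD not provided by the secrets store")
    return password
```

Failing fast when the secret is absent is the important detail: a hardcoded fallback would silently reintroduce the low-maturity behaviour.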

## Information Gathering

Concerning metrics, the community has been quite vocal about what to measure
and how important it is.
The OWASP CISO guide offers 3 broad categories of SDLC metrics[1] which can
be used to measure the effectiveness of security practices.
Moreover, there are a number of presentations on what could be leveraged
to improve a security programme, starting from Marcus Ranum's [keynote](https://www.youtube.com/watch?v=yW7kSVwucSk)
at AppSec California,
Caroline Wong's similar [presentation](https://www.youtube.com/watch?v=dY8IuQ8rUd4)
and [this presentation](https://www.youtube.com/watch?v=-XI2DL2Uulo) by J. Rose and R. Sulatycki.
These, among several writeups by private companies, each offer their own version of what could be measured.

Projects such as the [ELK stack](https://www.elastic.co/elastic-stack), [Grafana](https://grafana.com/)
and [Prometheus](https://prometheus.io/docs/introduction/overview/) can be used to aggregate
logging and provide observability.

However, no matter the WAFs, logging, and secure configuration enforced
at this stage, incidents will eventually occur.
Incident management is a complicated and high-stress process.
To prepare organisations for it, SAMM includes a section on [incident management](https://owaspsamm.org/model/operations/incident-management/) with simple questions for stakeholders to answer, so you can accurately determine incident preparedness.

**Example High Maturity Scenario:**

Logging from all components is aggregated in dashboards, and alerts
are raised based on several thresholds and events.
Canary values and events are fired against the monitoring
from time to time to validate that it works.
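
The canary idea amounts to a self-test of the alerting pipeline: periodically inject a synthetic event with a known marker and fail loudly if no alert comes back. A toy, in-process illustration in Python (a real deployment would route the canary through the actual log pipeline and alerting stack):

```python
import uuid

class AlertingPipeline:
    """Toy stand-in for a log-aggregation plus alerting stack."""
    def __init__(self):
        self.alerts = []

    def ingest(self, event: dict) -> None:
        # Alert rule: anything tagged as a canary must raise an alert.
        if event.get("canary"):
            self.alerts.append(event["id"])

def run_canary_check(pipeline: AlertingPipeline) -> bool:
    """Fire a synthetic canary event and verify an alert was raised."""
    canary_id = str(uuid.uuid4())
    pipeline.ingest({"id": canary_id, "canary": True})
    return canary_id in pipeline.alerts

pipeline = AlertingPipeline()
assert run_canary_check(pipeline), "monitoring failed the canary check"
```

A canary that stops raising alerts is itself an incident: it means the monitoring, not just the monitored system, has failed.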
