What Is the Purpose of Writing and Reviewing Accurate and Complete Incident Reports
Documentation Review
Evidence of Assessment
Leighton Johnson , in Security Controls Evaluation, Testing, and Assessment Handbook, 2016
Documentation
Documentation consists of the organization's business documents used to support security and accounting events. The strength of documentation is that it is prevalent and available at a low cost. Documents can be internally or externally generated. Internal documents provide less reliable evidence than external ones, particularly if the client's internal control is suspect. Documents that are external and have been prepared by qualified individuals such as attorneys or insurance brokers provide additional reliability. The use of documentation in support of a client's transactions is called vouching.
Documentation review criteria include three areas of focus (a small sketch of the three levels follows the list):
1. Review is used for the "generalized" level of rigor, that is, a high-level examination looking for required content and for any obvious errors, omissions, or inconsistencies.
2. Study is used for the "focused" level of rigor, that is, an examination that includes the intent of "review" and adds a more in-depth examination for greater evidence to support a determination of whether the document has the required content and is free of errors, omissions, and inconsistencies.
3. Analyze is used for the "detailed" level of rigor, that is, an examination that includes the intent of both "review" and "study," adding a thorough and detailed analysis for significant grounds for confidence in the determination of whether required content is present and the document is correct, complete, and consistent.
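As an illustration of how these levels nest, here is a minimal sketch (Python, names invented; the levels come from the text above, not from any standard library or API):

```python
from enum import Enum

class Rigor(Enum):
    """The three levels of documentation-review rigor described above."""
    REVIEW = "generalized"
    STUDY = "focused"
    ANALYZE = "detailed"

def required_checks(rigor: Rigor) -> list[str]:
    """Checks accumulate: each level includes the intent of the ones below it."""
    checks = ["required content is present",
              "no obvious errors, omissions, or inconsistencies"]
    if rigor in (Rigor.STUDY, Rigor.ANALYZE):
        checks.append("in-depth examination for errors, omissions, inconsistencies")
    if rigor is Rigor.ANALYZE:
        checks.append("detailed analysis of correctness, completeness, consistency")
    return checks

print(required_checks(Rigor.STUDY))  # "study" implies everything "review" implies
```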
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128023242000129
Evidence of assessment
Leighton Johnson , in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020
Documentation
Documentation consists of the organization's business documents used to support security and accounting events. The strength of documentation is that it is prevalent and available at a low cost. Documents can be internally or externally generated. Internal documents provide less reliable evidence than external ones, particularly if the client's internal control is suspect. Documents that are external and have been prepared by qualified individuals such as attorneys or insurance brokers provide additional reliability. The use of documentation in support of a client's transactions is called vouching.
Documentation Review Criteria include three areas of focus:
(a) Review is used for the "generalized" level of rigor; that is, a high-level examination looking for required content and for any obvious errors, omissions, or inconsistencies.
(b) Study is used for the "focused" level of rigor; that is, an examination that includes the intent of "review" and adds a more in-depth examination for greater evidence to support a determination of whether the document has the required content and is free of errors, omissions, and inconsistencies.
(c) Analyze is used for the "detailed" level of rigor; that is, an examination that includes the intent of both "review" and "study," adding a thorough and detailed analysis for significant grounds for confidence in the determination of whether required content is present and the document is correct, complete, and consistent.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128184271000148
Data Quality Assessment
David Loshin , in The Practitioner's Guide to Data Quality Improvement, 2011
11.2.4 Document Information Production Flow
A documentation review is intended to determine the flow of information across the business process, and map how the data from the raw information sources is transformed into the ultimate information product. Constructing an information production flow diagram is valuable for many reasons:
• It provides an information-centric view of the ways that business processes are executed.
• It focuses on multiple uses of data and information across information system and business process boundaries.
• It reduces the concentration on functional requirements in deference to enterprise data and information requirements across the organization.
• It documents the way that data flows across business processes and can be used to identify the best locations for inspecting data quality and identifying flaws before any business impacts can occur.
The information production flow notes how data flows through these stages within an application:
1. Data sources that are used by the business process
2. Processing stages, noting any processing performed on the data
3. Storage locations, listing the data elements that are stored and the system where the data is stored
4. Validation points, where there are checks for data quality criteria, and for each location, the list of data quality validations performed at that point
5. Decision points at which the processing stream is directed based on the outcome of evaluating different conditions
Any data handoffs between processing stages, application boundaries, or system boundaries are also noted.
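As a minimal sketch (invented stage and data names, not drawn from the book), the five stages and the handoffs between them can be captured in a small graph structure:

```python
from dataclasses import dataclass, field

# The five stage kinds listed above.
STAGE_KINDS = {"source", "processing", "storage", "validation", "decision"}

@dataclass
class Stage:
    name: str
    kind: str                                          # one of STAGE_KINDS
    details: list[str] = field(default_factory=list)   # e.g., data elements stored, checks run

@dataclass
class InformationProductionFlow:
    stages: dict[str, Stage] = field(default_factory=dict)
    handoffs: list[tuple[str, str]] = field(default_factory=list)  # (from_stage, to_stage)

    def add_stage(self, stage: Stage) -> None:
        assert stage.kind in STAGE_KINDS, f"unknown stage kind: {stage.kind}"
        self.stages[stage.name] = stage

    def add_handoff(self, src: str, dst: str) -> None:
        # Handoffs across processing stages or system boundaries are recorded explicitly.
        self.handoffs.append((src, dst))

flow = InformationProductionFlow()
flow.add_stage(Stage("CRM extract", "source"))
flow.add_stage(Stage("address standardization", "processing"))
flow.add_stage(Stage("postal-code check", "validation", ["postal code matches city/state"]))
flow.add_handoff("CRM extract", "address standardization")
flow.add_handoff("address standardization", "postal-code check")
```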
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123737175000117
Assessment methods
Leighton Johnson , in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020
Document reviews
Documentation review determines if the technical aspects of policies and procedures are current and comprehensive. These documents provide the foundation for an organization's security posture but are often overlooked during technical assessments. Security groups within the organization should provide assessors with appropriate documentation to ensure a comprehensive review. Documents to be reviewed for technical accuracy and completeness include security policies, architectures, and requirements; standard operating procedures; system security plans and authorization agreements; memoranda of understanding and agreement for system interconnections; and incident response plans.
Documentation review can discover gaps and weaknesses that could lead to missing or improperly implemented security controls. Assessors typically verify that the organization's documentation is compliant with standards and regulations such as FISMA and look for policies that are deficient or outdated. Common documentation weaknesses include OS security procedures or protocols that are no longer used, and failure to include a new OS and its protocols. Documentation review does not ensure that security controls are implemented properly—only that the management and guidance exist to support the security infrastructure.
Results of documentation review can be used to fine-tune other testing and examination techniques. For example, if a password management policy has specific requirements for minimum password length and complexity, this information can be used to configure password-cracking tools for more efficient performance.
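For instance, here is a minimal sketch, assuming a hypothetical policy of a 12-character minimum and at least three character classes: pre-filtering candidate guesses so a password-auditing run wastes no time on strings the policy would have forbidden anyway:

```python
import string

# Hypothetical policy values; an assessor would read these from the
# organization's actual password management policy.
MIN_LENGTH = 12
REQUIRED_CLASSES = 3   # of: lowercase, uppercase, digits, symbols

def policy_compliant(candidate: str) -> bool:
    """Keep only guesses a policy-compliant user could actually have chosen."""
    if len(candidate) < MIN_LENGTH:
        return False
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    hits = sum(any(ch in cls for ch in candidate) for cls in classes)
    return hits >= REQUIRED_CLASSES

wordlist = ["password", "Tr0ub4dor&3x!", "correcthorsebatterystaple"]
print([w for w in wordlist if policy_compliant(w)])  # ['Tr0ub4dor&3x!']
```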
1. "Gap" Analysis Process: The initial document reviews which start off the assessment process often provide information to the assessor and the assessment team to use in focusing on and pinpointing areas of potential concern to review, examine, evaluate, and test in the system or application under review. These document reviews allow the assessment team to identify recently repaired controls, areas of volatility in controls and protection, and potentially areas overlooked or reduced in strength of control protection. This leads to a "Gap Analysis" which can determine what requirements, operational criteria, security objectives, and compliance needs are and are not being met by the system under review. This "Gap Analysis" process often has been used to discover areas of weakness in policies, procedures, and reporting for systems and applications. The "Gap Analysis" process which I often used is described as follows:
2. Review each authorization package using the 15-step methodology outlined below (a sketch of the document-to-standards mapping in steps 4 through 6 follows the list). Using this defined process for review promotes consistency and quality of package analysis.
1. Review current documents for completeness and accuracy, based on the established security baseline.
2. Review current documents for System Security Classification and level determination.
3. Catalog current documents into security areas.
4. Develop mapping for current documents to FISMA, DOD IA Regulations (if applicable), NIST guidance, US Governmental Agency Regulations (if applicable), and FIPS Standards (DODI 8510.01, SP 800-37, SP 800-53, SP 800-53A, FIPS-199, etc.).
5. Review current documents for mapping status.
6. Identify preliminary documents, policies, plans, or procedures with questionable mapping status.
7. Research any missing or unclear policies, procedures, plans, or guidelines in support documentation.
8. Develop questions and issues report for customer remediation, answers, and identification.
9. Identify agency standards and guidelines for document creation and development.
10. Develop missing and required policies, plans, or procedures, as required, such as:
• System of Record Notice (SORN) to register the system
• Residual Risk Assessment
• Plan of Action and Milestones (POA&M)
• Any additional A&A-related artifacts as part of the submission package, such as:
  • Security CONOPS
  • Security Policies
  • Security Architecture drawings and documents
  • Security User Security Manual and Standing Operating Procedures (USM/SOP)
  • Continuity of Operations (COOP)
  • Incident Response Plan
  • Contingency Plan
  • Configuration Management Plan
11. Submit these developed documents for review, comment, revision, and approval.
12. Once all documents and questions are answered, review vulnerability scans for actual technical controls implemented versus controls documented.
13. Develop report on controls assessment.
14. Complete required RMF certification and accreditation worksheets, documents, and forms.
15. Develop SCA ATO Recommendation and Risk Assessment Reports, IAW the agency requirements.
A. The completed review is then submitted to the quality assurance review for the internal consistency, completeness, and definiteness (3C) review.
a. The consistency, completeness, and definiteness of the documentation are determined, and if quality standards are met, the documentation is then passed on to the final submittal phase.
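A minimal sketch of the mapping work in steps 4 through 6, using invented artifact and standard names, that flags package documents with questionable mapping status:

```python
# Invented artifact and standard names; a real package would map to the
# governing documents listed in step 4 (FISMA, NIST SPs, FIPS, agency regs).
REQUIRED_STANDARDS = {"FISMA", "SP 800-53", "FIPS-199"}

package = {
    "System Security Plan": {"FISMA", "SP 800-53"},
    "Risk Assessment": {"SP 800-53", "FIPS-199"},
    "Incident Response Plan": set(),   # questionable mapping status (step 6)
}

def gap_report(docs):
    """For each document, list the required standards it does not yet map to."""
    return {name: REQUIRED_STANDARDS - mapped
            for name, mapped in docs.items()
            if REQUIRED_STANDARDS - mapped}

for doc, missing in gap_report(package).items():
    print(f"{doc}: missing mapping to {sorted(missing)}")
```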
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128184271000082
Assessment Methods
Leighton Johnson , in Security Controls Evaluation, Testing, and Assessment Handbook, 2016
Document Reviews
Documentation review determines if the technical aspects of policies and procedures are current and comprehensive. These documents provide the foundation for an organization's security posture, but are often overlooked during technical assessments. Security groups within the organization should provide assessors with appropriate documentation to ensure a comprehensive review. Documents to be reviewed for technical accuracy and completeness include security policies, architectures, and requirements; standard operating procedures; system security plans and authorization agreements; memoranda of understanding and agreement for system interconnections; and incident response plans.
Documentation review can find gaps and weaknesses that could lead to missing or improperly implemented security controls. Assessors typically verify that the organization's documentation is compliant with standards and regulations such as FISMA, and look for policies that are deficient or outdated. Common documentation weaknesses include OS security procedures or protocols that are no longer used, and failure to include a new OS and its protocols. Documentation review does not ensure that security controls are implemented properly – only that the management and guidance exist to support the security infrastructure.
Results of documentation review can be used to fine-tune other testing and examination techniques. For example, if a password management policy has specific requirements for minimum password length and complexity, this information can be used to configure password-cracking tools for more efficient performance.
1. "Gap" analysis process: The initial document reviews which start off the assessment process often provide information to the assessor and the assessment team to use in focusing on and pinpointing areas of potential concern to review, examine, evaluate, and test in the system or application under review. These document reviews permit the assessment team to identify recently repaired controls, areas of volatility in controls and protection, and potentially areas overlooked or reduced in strength of control protection. This leads to a "gap analysis" which can determine what requirements, operational criteria, security objectives, and compliance needs are and are not being met by the system under review. This "gap analysis" process often has been used to discover areas of weakness in policies, procedures, and reporting for systems and applications. The "gap analysis" process which I often used is described as follows:
(a) Review each authorization package using the 15-step methodology outlined below. Using this defined process for review promotes consistency and quality of package analysis.
1. Review current documents for completeness and accuracy, based on the established security baseline.
2. Review current documents for System Security Classification and level determination.
3. Catalog current documents into security areas.
4. Develop mapping for current documents to FISMA, DOD IA regulations (if applicable), NIST guidance, US governmental agency regulations (if applicable), and FIPS standards (DODI 8510.01, SP 800-37, SP 800-53, SP 800-53A, FIPS-199, etc.).
5. Review current documents for mapping status.
6. Identify preliminary documents, policies, plans, or procedures with questionable mapping status.
7. Research any missing or unclear policies, procedures, plans, or guidelines in support documentation.
8. Develop questions and issues report for customer remediation, answers, and identification.
9. Identify agency standards and guidelines for document creation and development.
10. Develop missing and required policies, plans, or procedures, as required, such as:
(1) System of Record Notice (SORN) to register the system
(2) Residual risk assessment
(3) Plan of Action and Milestones (POA&M)
(4) Any additional Assessment and Authorization (A&A)-related artifacts as part of the submission package, such as:
  • Security Concept of Operations (CONOPS)
  • Security policies
  • Security architecture drawings and documents
  • Security User Security Manual and Standing Operating Procedures (USM/SOP)
  • Continuity of Operations (COOP)
  • Incident Response Plan
  • Contingency Plan
  • Configuration Management Plan
11. Submit these developed documents for review, comment, revision, and approval.
12. Once all documents and questions are answered, review vulnerability scans for actual technical controls implemented versus controls documented.
13. Develop report on controls assessment.
14. Complete required RMF certification and accreditation worksheets, documents, and forms.
15. Develop SCA ATO Recommendation and Risk Assessment Reports, IAW the agency requirements.
The completed review is then submitted to the quality assurance review for the internal consistency, completeness, and definiteness (3C) review.
(i) The consistency, completeness, and definiteness of the documentation are determined, and if quality standards are met, the documentation is then passed on to the final submittal phase.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128023242000087
Business Continuity and Disaster Recovery in Energy/Utilities
Susan Snedaker , Chris Rima , in Business Continuity and Disaster Recovery Planning for IT Professionals (Second Edition), 2014
NERC CIP-009 recovery testing
NERC CIP Reliability Standards require ABC to perform a number of BC/DR tests and documentation reviews each year under Standard CIP-009—Recovery Plans for Critical Cyber Assets. Cyber assets are defined as IT assets which communicate via routable protocols in order to control (noncyber) Critical Assets required to operate the Bulk Electric System. Specifically, ABC is required to:
• Annually review recovery plan(s) for Critical Cyber Assets, which (1) specify the required actions in response to events or conditions of varying duration and severity that would activate the recovery plan(s), and (2) include processes and procedures for the backup and storage of information required to successfully restore Critical Cyber Assets; for example, backups may include spare electronic components or equipment, written documentation of configuration settings, tape backup, etc.;
• Exercise the recovery plan at least annually. An exercise of the recovery plan(s) can range from a paper drill, to a full operational exercise, to recovery from an actual incident;
• Update recovery plan(s), as part of its change control procedures, to reflect any changes or lessons learned as a result of an exercise or the recovery from an actual incident. Updates must be communicated to personnel responsible for the activation and implementation of recovery plan(s) within 30 calendar days of the change being completed;
• Annually test data essential to recovery that is stored on backup media to ensure that the data is available; the test can be performed off-site.
For both the recovery plan exercise and backup media test, ABC holds annual meetings with cross-functional CIP operational areas to complete standard forms which demonstrate the annual exercises/tests occurred. For each CIP network, different Critical Cyber Assets are identified each year for testing. If any actual incidents occurred during the year which caused the Cyber Asset to have an unplanned outage, ABC documents actual recovery steps and ensures any lessons learned are incorporated into the existing recovery plan or associated asset recovery procedures. Otherwise, a tabletop exercise will be scheduled so that participants walk through a fictional incident and document roles, responsibilities, and tasks involved in a simulated recovery operation. Roles may include both internal staff and outside vendors, consultants, or other support staff. Acceptance (or validation) criteria, used to determine successful restoration, are documented. Procedures used to determine if acceptance criteria were met are documented. Evidence demonstrating acceptance criteria, such as screen shots, is inserted. Corrective actions, based on outcomes, are also documented. Finally, the recovery exercise is signed by all participants, approved by management both in advance of the exercise and upon completion, and archived for 3 years.
For the annual backup media test, ABC identifies different CIP Cyber Assets within each CIP operational area each year. For this set of Cyber Assets, a form is completed documenting test objective, acceptance criteria, initial conditions (i.e., state of the Cyber Asset prior to testing), sequence of events, duration, expected response, participants and evaluators, emergency termination conditions, test results (e.g., screen shots showing evidence of data restored from backup media, etc.), validation that termination criteria were met (i.e., was the test successful or not?), corrective action needed (if the test wasn't successful), and any lessons learned.
If corrective action is needed, a corrective action plan is documented elsewhere on the form. Again, management approves at two stages: after plan completion (before testing begins) and after acceptance criteria are documented (including any corrective action plan). Acceptance criteria can range from being able to read/restore configuration files for hardened devices, such as network switches or programmable logic controllers (PLCs), to the ability to read/restore system and data volume backup files/catalog for a database, application, or other type of server.
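A record like the form described above could be modeled as follows. This is a sketch only; the field names paraphrase the text and are not ABC's actual form schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BackupMediaTest:
    """One annual CIP-009 backup media test record (illustrative fields)."""
    cyber_asset: str
    objective: str
    acceptance_criteria: str
    initial_conditions: str                       # state of the Cyber Asset prior to testing
    test_successful: Optional[bool] = None        # were the termination criteria met?
    corrective_action_plan: Optional[str] = None  # required only when the test fails
    lessons_learned: list[str] = field(default_factory=list)

    def missing_corrective_action(self) -> bool:
        # A failed test must document a corrective action plan elsewhere on the form.
        return self.test_successful is False and self.corrective_action_plan is None

test = BackupMediaTest(
    cyber_asset="substation PLC #7",
    objective="restore device configuration from backup media",
    acceptance_criteria="restored configuration matches the running configuration",
    initial_conditions="asset offline in the lab",
    test_successful=False,
)
assert test.missing_corrective_action()
```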
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124105263099773
Security and Trust Assessment, and Design for Security
Swarup Bhunia , Mark Tehranipoor , in Hardware Security, 2019
13.4.4 Penetration Testing
A penetration test, or intrusion test, is an attack on a system with the intention to find security weaknesses. It is often performed by expert hackers with deep knowledge of the system architecture, design, and implementation. Roughly, penetration testing involves iterative application of the following three phases: attack surface enumeration, vulnerability exploitation, and result analysis.
13.4.4.1 Attack Surface Enumeration
The first task is to identify the features or aspects of the system that are vulnerable to attacks. This is typically a creative process involving a number of activities, including documentation review, network service scanning, and even fuzzing, or random testing.
13.4.4.2 Vulnerability Exploitation
Once the potential attacker entry points are discovered, applicable attacks and exploits are attempted against target areas. This may require research into known vulnerabilities, looking up applicable vulnerability class attacks, engaging in vulnerability research specific to the target, and writing/creating the necessary exploits.
13.4.4.3 Result Analysis
In this stage, the resulting state of the target after a successful attack is compared against security objectives and policy definitions to determine whether the system is indeed compromised. Note that even if a security objective is not directly compromised, a successful attack may identify additional attack surface, which must then be accounted for with further penetration testing.
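The three phases can be read as a loop: exploit what the current surface allows, compare the result against the security objectives, and feed any newly exposed surface into the next round. The toy sketch below (invented system data; real penetration testing is largely manual and creative) illustrates only that control flow:

```python
def enumerate_attack_surface(system):
    # Phase 1: in practice, documentation review, service scanning, fuzzing.
    return list(system["exposed"])

def attempt_exploits(system, surface):
    # Phase 2: try applicable attacks against each entry point (toy lookup).
    return [p for p in surface if p in system["vulnerable"]]

def analyze_results(system, exploited, objectives):
    # Phase 3: which objectives fail, and what new surface a foothold exposes.
    compromised = [o for o in objectives if system["protects"].get(o) in exploited]
    new_surface = [s for p in exploited for s in system["reachable_from"].get(p, [])]
    return compromised, new_surface

system = {
    "exposed": ["debug UART", "JTAG port"],
    "vulnerable": {"debug UART", "firmware loader"},
    "protects": {"key confidentiality": "firmware loader"},
    "reachable_from": {"debug UART": ["firmware loader"]},
}
surface, findings = enumerate_attack_surface(system), []
for _ in range(5):                      # iterative application of the three phases
    exploited = attempt_exploits(system, surface)
    compromised, new_surface = analyze_results(system, exploited, ["key confidentiality"])
    findings += compromised
    if not new_surface:                 # no additional attack surface uncovered
        break
    surface = new_surface               # newly exposed surface feeds the next round
print(findings)                         # ['key confidentiality'], found in round two
```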
While there are commonalities between penetration testing and testing for functional validation, there are important differences. In particular, the goal of functional testing is to simulate benign user behavior and (perhaps) accidental failures under normal environmental conditions of design operation, as defined by its specification. On the other hand, penetration testing goes outside the specification or the limits set by the security objective, and simulates deliberate attacker behavior.
The efficacy of penetration testing critically depends on the ability to identify the attack surface in the first phase previously discussed. Unfortunately, rigorous methodologies for achieving this are lacking. Following are some of the typical activities in current industrial practice to identify attacks and vulnerabilities. They are classified as "easy," "medium," and "hard," depending on the creativity necessary. Note that there are tools to aid the individual in many of the activities below [34,35]. However, determining the relevance of the activity, identifying the degree to which each activity should be explored, and inferring a potential attack from the result of the activity involve significant creativity.
• Easy approaches: These include review of available documentation (for example, specification and architectural materials), known vulnerabilities or misconfigurations of IPs, software, or integration tools, missing patches, and use of obsolete or out-of-date software versions.
• Medium-complexity approaches: These include inferring potential vulnerabilities in the target of interest from information about misconfigurations, vulnerabilities, and attacks in related or analogous products, for example, a competitor product and a previous software version. Other activities of similar complexity involve executing relevant public security tools, or published attack scenarios against the target.
• Hard approaches: These include full security evaluation of any utilized third-party components, integration testing of the whole platform, and identification of vulnerabilities involving communications among multiple IPs, or design components. Finally, vulnerability research involves identifying new classes of vulnerabilities for the target, which have never been seen before. The latter is particularly relevant for new IPs, or SoC designs for completely new market segments.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128124772000186
Software Application Component Design Stage Verification
Jean-Louis Boulanger , in Certifiable Software Applications 3, 2018
12.2.2.3 Design review
The verification of a phase requires the verification of the implementation of quality requirements (procedures application, compliance of formats, etc.), the application of processes (compliance with plans, compliance with the organization, etc.), the correctness of activities, and that safety requirements are properly taken into account.
Concerning the design phase, methodological guides related to modeling, design conventions, architecture principles, etc., will be included in input documents; therefore there will be additional verifications.
The modeling rules include:
– naming conventions for all objects (constants, global variables, interfaces, local variables, software parameters, function parameters, functions, modules, etc.);
– documentation-related rules;
– design-related rules;
– decomposition rules.
12.2.2.3.1 Documentation review
The documentation review is then conducted through a quick reading (walkthrough) or a design review (formal design review). The documentation review was presented in detail in section 7.2.2.2.1.
This verification must have an objective. This objective may be formalized in the form of a checklist (control list).
12.2.2.3.2 Quality activity
This activity has been discussed in section 7.2.2.2.2.
12.2.2.3.3 Verifier
As already stated, the software component design verification (SwCS) must be made by a verifier (see Figure 7.1 for the organization). The verifier (see Table 7.3) shall verify the technical content of the document (see section 7.2.2.2.3 for more information).
12.2.2.3.4 Software component design verification
As has been stated, the description of the design of a software component must define:
– external stored information (global variables);
– the parametrization data used by the component and its functions;
– stored data (local variables), their protections, and access means.
For each function, we need to identify:
– the interfaces being used;
– the requirements to assume and their traceability with the requirements of the SwCD;
– the algorithms.
On this basis, it is possible to implement several verifications of the SwCS:
– verify that all the external interfaces are connected: there is at least one internal component that makes use of the information circulating through each external interface 1 ;
– verify that the use of global variables is justified;
– verify that the use of global variables is protected 2 ;
– verify that the use of local variables is justified;
– verify that local variables are protected 3 .
And for each function of the component (a sketch automating two of the interface checks follows this list):
– verify that all input interfaces appear at least once in a function as input. If an input interface is not included, then it will not be used;
– verify that all output interfaces appear at least once within a function as output. If an output interface is not included, then it will not be used;
– verify that all software parametrization data are included at least once in a component requirement. If a software parametrization data element were not included, it would then not be used;
– verify that all requirements are traceable with the SwCS;
– verify that all the functions use interfaces as inputs, otherwise the requirement is not observable;
– verify that all the functions use interfaces as outputs, otherwise the requirement is not observable;
– verify that all requirements identify the cases in which they are applicable. Otherwise, there would be a possible ambiguity.
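As a minimal sketch, with an invented component description, the first two interface checks above can be automated as set differences:

```python
# Invented component description: declared interfaces and per-function usage.
component = {
    "input_interfaces": {"speed_in", "door_state_in"},
    "output_interfaces": {"brake_cmd_out", "alarm_out"},
    "functions": {
        "F1": {"inputs": {"speed_in"}, "outputs": {"brake_cmd_out"}},
        "F2": {"inputs": {"speed_in", "door_state_in"}, "outputs": set()},
    },
}

used_inputs = set().union(*(f["inputs"] for f in component["functions"].values()))
used_outputs = set().union(*(f["outputs"] for f in component["functions"].values()))

# An interface that no function consumes or produces will never be used.
print("unused input interfaces:", component["input_interfaces"] - used_inputs)     # set()
print("unused output interfaces:", component["output_interfaces"] - used_outputs)  # {'alarm_out'}
```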
This is a first verification set that ought to be completed by your feedback. It is very important to put in place a Return of Experience (REX) approach: when some defects are not detected early, it is also very important to verify whether it is possible to improve the verification and the checklist used during verification.
12.2.2.3.5 Software component design complexity verification
At the architecture level, the complexity was linked to the distribution of requirements onto the components. At the software component design level, the complexity concerns several aspects:
– the number of requirements per function;
– the complexity of each function;
– the number of interfaces per function;
– the number of data handled by each function.
The design complexity of a software component can be analyzed through several topics (a small metrics sketch follows the list):
– analysis of the distribution of requirements per function: Figure 12.1 presents an example that shows that functions F4 and F5 support more requirements than others. It is necessary to verify whether this is normal. It is preferable to have a balanced distribution of requirements within the architecture;
Figure 12.1. Example of requirement distribution management per function
– analysis of function complexity: the project must define metrics that must be measured from the component functions in order to be able to identify complex functions. Figure 12.2 shows an example where functions F3 and F5 exhibit significant complexity;
Figure 12.2. Example of functional complexity control
– analysis of interface distribution: as for the requirements, it is necessary to verify that the interfaces are evenly distributed onto all of the components;
– coupling analysis: it is necessary to verify that there is no coupling in the architecture. Functional coupling is characterized by the number of functions in direct relationship (see Figure 12.3). We must therefore measure the coupling of each component and look for critical points. Strong coupling is similar to a spaghetti plate. It introduces a difficulty in performing integration tests and mainly in carrying out component maintenance. Each change of function H has a lot of impact;
Figure 12.3. Example of strong functional coupling
– data coupling analysis: as shown in Figure 12.4, a stored data element can be used by several functions. An implicit link is seen to appear between functions, which is not good for maintainability.
Figure 12.4. Example of datum coupling
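A minimal sketch of two of these measurements, with invented numbers for the F1 to F5 example: flagging functions that carry an unbalanced share of requirements, and counting the implicit couplings created by shared stored data:

```python
from collections import Counter
from itertools import combinations

# Invented example data: requirements supported per function, and the stored
# data each function touches (the implicit links of Figure 12.4).
reqs_per_function = {"F1": 2, "F2": 1, "F3": 3, "F4": 9, "F5": 8}
data_usage = {"F1": {"D1"}, "F2": {"D1", "D2"}, "F3": {"D2"}, "F4": {"D2"}, "F5": {"D3"}}

# Requirement distribution: flag functions carrying far more than the mean.
mean = sum(reqs_per_function.values()) / len(reqs_per_function)
overloaded = [f for f, n in reqs_per_function.items() if n > 1.5 * mean]
print("unbalanced requirement load:", overloaded)   # ['F4', 'F5']

# Data coupling: two functions are implicitly coupled when they share a stored datum.
coupling = Counter()
for f, g in combinations(data_usage, 2):
    if data_usage[f] & data_usage[g]:
        coupling[f] += 1
        coupling[g] += 1
print("coupling degree per function:", dict(coupling))  # F2 is the critical point here
```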
12.2.2.3.6 Verification of requirement traceability
In order to cover the needs for requirement completeness and consistency, it will be necessary to put in place a requirement verification phase for each phase of the implementation process, as defined in section 7.2.2.2.6.
12.2.2.3.7 Checklists for design verification
Requirement verification can be accomplished using a checklist. This checklist (see, for example, Table 12.1) will have to define control points. These latter are related to the knowledge of the types of errors that can be introduced during the activity that has produced the documents to be verified.
Table 12.1. Example of checklists concerning a component
| Point | Rule | Status OK/KO | Comment |
|---|---|---|---|
| R_1 | All the requirements of the SwCD must be traced to the SwCS or a justification must be given | | |
| R_1_1 | If a requirement of the SwCS is not traced, a judgment on the justification will have to be provided | | |
| R_2 | All the requirements of the SwCS must be traced to the SwCD or a justification must be given | | |
| R_2_1 | If a requirement of the SwCS is not traced, a judgment on the justification will have to be provided | | |
| R_3 | All interfaces introduced for this component in the SwCD are repeated in the SwCS | | |
| R_3_1 | If an external interface is not used (no connection), a justification and a control mechanism must be present | | |
| R_4 | All the parameters introduced in the SwCD are inserted again in the SwCS | | |
| R_4_1 | If a parameter is not used, there must be a justification as well as a control mechanism | | |
| R_5 | All local variables are defined, justified, and initialized | | |
In Table 12.2, we introduce some rules related to algorithm verification.
Table 12.2. Example of checklist concerning the component and algorithm
| Point | Rule | Condition OK/KO | Comment |
|---|---|---|---|
| R_60 | All algorithms are clearly defined | ||
| R_61 | All data produced in algorithms are used | ||
| R_62 | All data consumed by algorithms are produced | ||
| R_63 | Complexity of algorithms respects the design standard | | |
| R_64 | Formalism used to define algorithms is completely defined | | |
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781785481192500121
Security Compliance Management and Auditing
Jason Andress CISSP, ISSAP, CISM, GPEN , Mark Leary CISSP, CISM, CGIET, PMP , in Building a Practical Information Security Program, 2017
Actions
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B978012802042500010X
Meeting the Challenges of Enterprise BI
Steve Williams , in Business Intelligence Strategy and Big Data Analytics, 2016
7.2.5 Challenge: BI Managed Under Typical IT Policies and Methods
7.2.5.1 The IT Shared Services Mindset
ITIL is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business. ITIL describes processes, procedures, tasks, and checklists which are not organization-specific, but can be applied by an organization for establishing integration with the organization's strategy, delivering value, and maintaining a minimum level of competency.
Wikipedia.
Managing IT is a complex endeavor that encompasses asset management, operations management, planned maintenance, reactive maintenance, program and project management, and resource management. The ITIL approach is a shared services approach to delivering IT services across multiple IT projects, including BI projects. In a steady state world, an IT organization could be staffed to meet known plus reasonably predictable operations and maintenance needs. In a dynamic world where IT has to adapt to business competition, technological innovation, and evolving business operations, a chief challenge is to staff dozens if not hundreds of IT projects. These projects require a diverse set of technical skills that must be available in the right quantity at the right time so that all projects have the skills needed to accomplish the technical work.
One approach would be for every project to staff just for its own needs, but that would create idle capacity at various points in a system development lifecycle. In order to minimize the costs of excess resources, many IT organizations have adopted an IT shared services model, which is essentially a matrix management approach applied to IT. Organizational design experts have known for years that matrix management is the most complicated form of organization to manage—due to resource scheduling complexity and resource availability conflicts between projects. In a shared services world, there is also a conflict between: (1) IT service standards and policies intended to optimize service excellence; and (2) the more delivery-oriented world of project managers and the business units they serve. One result is that IT project managers cannot truly control schedule performance or the technical methods used. Another result is that the IT people have to serve more than one supervisor—the manager of the particular shared service and the project manager or managers for the project to which they are assigned.
We'll use Table 7.1 to illustrate the relationship between available IT services under the shared services approach and a theoretical portfolio of projects.
Table 7.1. Scheduling IT People Under the Shared Services Approach Can Be Complex, Time-Consuming, Subject to the Difficulties of Estimating Required Work Efforts By Job Type, and Prone to Resource Schedule Conflicts
| Company IT Projects | Project 1 | Project 2 | Project 3…. | …..Project N |
|---|---|---|---|---|
| Enterprise IT Services (Representative Subset) | | | | |
| Information Needs Identification and Refinement | | | | |
| Source System Reverse Engineering | | | | |
| Data Model Development (logical and physical) | | | | |
| Data Architecture Assessment | | | | |
| Data Integration Design and Development | | | | |
| BI/Analytics Application Design and Development | | | | |
| Data Governance Policy Adherence | | | | |
| Data Dictionary and Meta Data Management | | | | |
| Master Data Identification and Management | | | | |
| Data Provisioning | | | | |
| Data Connectivity | | | | |
| End to End Support for SDLC for DW/BI Projects | | | | |
| Disaster Recovery Design for Data Warehouse/Other Data Stores | | | | |
| Archiving for Data Warehousing | | | | |
The left-hand column lists all the different types of IT services available for IT projects, including BI projects. The triangles are used to denote two IT resources; #1 is a data modeler and #2 a data integration designer. We see that the data modeler is assigned to three projects, and the data integration designer is assigned to two projects. If it were to turn out that the data modeler is spread across too many projects, or if his or her skills are required at the same time for two or more different projects, then the data model deliverables will be delayed for one or more projects. There is a dependency between data models and data integration designs, so if the data model is delayed, the data integration designer may not be able to start or complete his or her work on time. That delay then cascades through the rest of the project lifecycle. More broadly, there are many such dependencies between the various IT services during a typical project lifecycle, so if any one or more service does not have adequate capacity, or if the services are being optimized for their own sakes, or if the people providing the services are not solid performers, then projects get delayed. To further complicate matters, the number of people needed by a given project would vary according to the project, and the services needed would also vary by project. These factors make for scheduling challenges and inhibit the ability of project managers to control the resources they need to get the work done.
The example we used is a simple one. Imagine the complexity of trying to align IT resources across dozens or hundreds of projects. Under the shared services approach, a BI initiative and its projects would be one customer among many. Accordingly, the pace at which the BI project can proceed hinges on the availability of the right IT people, who might be simultaneously serving multiple projects. The pace would also depend on how the various people approach their jobs. For example, the director of Data Model Development may aspire to building a "world class data modeling organization" and not be willing to have data modelers adopt the 80–20 rule for the BI project. More broadly, all of the IT service managers may be trying to optimize their function, as opposed to optimizing schedule or technical performance on any given project. It is akin to all teachers giving big homework assignments because they each think their subject is the most important and they don't coordinate/don't care to coordinate to avoid an unfairly adverse impact on the students.
We can think of an IT shared services organization as a job shop. In a job shop form of product manufacturing, different machines are used in different mixes to make a large variety of possible end products in response to order flows that are highly variable. This type of manufacturing is the most complex from an order sequencing and machine scheduling perspective. In an IT services organization, different IT people with different skills and skill levels are used in different mixes across multiple projects. As a job shop, an IT services organization has to grapple with the challenge of managing the mix and quantity of skills it has available for the various projects it has to execute. While bottom-up labor estimates for each project are developed as part of the IT capital planning process, there is substantial variability in how long the IT people will take to perform their services. For example, how long should it take to develop a data model? And what happens to a BI project schedule that assumed an IT resource would be available half-time and that resource is not as available or is not available at the right times? There are also factors outside the control of the IT service provider, and there are variations in performance between service providers. Arguably, resource planning in the IT world is even more complex than in the high-mix, low-volume manufacturing world, and the result is that schedule adherence and quality are hard to meet at the same time. This ends up slowing down BI application development projects unless scope is allowed to be reduced.
7.2.5.2 Best Practices Development Methodologies for IT Projects and BI Projects are Different
Given the complexity of IT, companies have to use rigorous development methods. This ensures that systems work as required by the business, that they don't break anything that is already working, that they can be maintained, that they are well-documented, and that new systems or applications work in the existing technical environment. Accordingly, companies tend to standardize a system development lifecycle methodology (SDLC) and use it on all IT projects. This also ensures that all IT people employ a common approach, which allows any given person to be used on any given project. Additionally, many companies also have a formal project management methodology with its own set of deliverables.
While there is no doubt that an SDLC is necessary, there are technically and organizationally valid reasons why a standard IT SDLC should be substantially modified and streamlined in the case of enterprise BI initiatives. Basically, the impacts of using an inappropriate SDLC for BI projects include:
• excess costs incurred for work that is not needed for effective, high-quality BI development results;
• schedule delays due to phase gates and documentation reviews that are not aligned with best practices BI phase gates and documentation types;
• excess costs associated with having to justify exceptions to IT managers who are not BI people and who may have legitimate but conflicting organizational objectives; and
• excess costs and schedule delays due to having to adapt the BI project to the "best practices" goals of managers of IT shared services.
Our point here is that in using a standard IT SDLC, it makes sense to tailor it for BI initiatives because some of the standard IT SDLC activities or deliverables do not add value and are not required to develop and deploy a BI application or information environment of suitable quality. That said, there seems to be an organizational bias in many organizations to avoid asking for exceptions to the SDLC. This slows BI application development and adds cost.
7.2.5.3 What is Being Optimized?
One of the biggest criticisms of BI initiatives is that BI projects take too long and cost too much. The criticisms come from business sponsors who are frustrated because what should be simple from a technical perspective is made slow and difficult because the processes for BI development and delivery are not being optimized. The central issue with managing a BI initiative using standard IT policies and methods is lack of goal congruence between how IT needs to operate and how BI projects can be executed most effectively. IT is optimized for control, risk minimization, and cost minimization—a careful, deliberate, and time-consuming mode of operation. BI is optimized for speed of delivery and business-driven value creation. Within such an environment, BI projects can only go as fast as broader IT policies, practices, and procedures permit.
From a general management perspective, the most straightforward way to resolve this inherent conflict of interests would be to create an autonomous BI organization with its own policies, people, and IT assets—hardware, software, and tools. The only truly necessary interface between a BI unit and the IT organization is around acquiring data needed for BI purposes, subject to appropriate security measures. Once a BI unit has the data it needs, its designers, developers, analysts, and so on can execute BI projects quickly and effectively in concert with the business people sponsoring the project. All this is not to say that the BI unit can be allowed to operate as a "rogue unit." BI and data warehousing are mature technical fields with proven methods, and the BI unit needs to be held to the highest professional standards.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128091982000075
Source: https://www.sciencedirect.com/topics/computer-science/documentation-review