c. Number of defects per million lines of source code without comments
d. The probability of failure-free operation in a specified time
66. d. Software quality can be expressed in two ways: defect rate and reliability. Software quality means conformance to requirements. If the software contains too many functional defects, the basic requirement of providing the desired function is not met. Defect rate is the number of defects per million lines of source code or per function point. Reliability is expressed as the number of failures per “n” hours of operation, mean-time-to-failure, or the probability of failure-free operation in a specified time. Reliability metrics deal with probabilities and timeframes.
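The two ways of expressing quality can be sketched as simple ratios. The helper names below are illustrative, not from any standard library:

```python
# Illustrative helpers for the two expressions of software quality:
# defect rate (per million LOC or per function point) and reliability
# (failures per "n" hours of operation).

def defect_rate_per_million_loc(defects, lines_of_code):
    """Defect rate: defects per million lines of source code."""
    return defects / lines_of_code * 1_000_000

def defect_rate_per_function_point(defects, function_points):
    """Defect rate: defects per function point."""
    return defects / function_points

def failures_per_n_hours(failures, hours_of_operation, n=1000):
    """Reliability: number of failures per n hours of operation."""
    return failures / hours_of_operation * n

# Example (assumed figures): 150 defects in 2,000,000 LOC, and
# 4 failures over 8,000 hours of operation.
print(defect_rate_per_million_loc(150, 2_000_000))  # 75.0 defects per million LOC
print(failures_per_n_hours(4, 8000))                # 0.5 failures per 1000 hours
```

Note that the defect-rate helpers take code-size denominators while the reliability helper takes an operating-time denominator, mirroring the distinction drawn in the answer.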
67. From a CleanRoom software engineering viewpoint, software quality is certified in terms of:
a. Mean-time between failures (MTBF)
b. Mean-time-to-failure (MTTF)
c. Mean-time-to-repair (MTTR)
d. Mean-time between outages (MTBO)
67. b. CleanRoom operations are carried out by small independent development and certification (test) teams. In CleanRoom, all testing is based on anticipated customer usage. Test cases are designed to exercise the more frequently used functions. Therefore, errors that are likely to cause frequent failures for the users are found first. For measurement, software quality is certified in terms of mean-time-to-failure (MTTF). MTTF is most often used with safety-critical systems such as air traffic control systems because it measures the time taken for a system to fail for the first time.
Mean-time-between-failures (MTBF) is incorrect because it is the average length of time a system is functional. Mean-time-to-repair (MTTR) is incorrect because it is the total corrective maintenance time divided by the total number of corrective maintenance actions during a given period of time. Mean-time-between-outages (MTBO) is incorrect because it is the mean time between equipment failures that result in loss of system continuity or unacceptable degradation.
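The distinctions among these "mean time" metrics can be sketched from a simple maintenance log. This is a minimal sketch with assumed figures and illustrative function names, using the common convention that MTBF for a repairable system equals operating time plus repair time per failure cycle:

```python
# Sketch distinguishing MTTF, MTTR, and MTBF from a maintenance log.
# Input lists and the MTBF = MTTF + MTTR convention are assumptions
# for illustration.

def mttf(uptimes):
    """Mean-time-to-failure: average operating time until failure."""
    return sum(uptimes) / len(uptimes)

def mttr(total_corrective_time, corrective_actions):
    """Mean-time-to-repair: total corrective maintenance time divided
    by the number of corrective maintenance actions in the period."""
    return total_corrective_time / corrective_actions

def mtbf(uptimes, downtimes):
    """Mean-time-between-failures for a repairable system:
    average operating time plus average repair time per cycle."""
    return mttf(uptimes) + mttr(sum(downtimes), len(downtimes))

uptimes = [900, 1100, 1000]      # hours functional before each failure
downtimes = [2, 4, 3]            # hours to repair each failure
print(mttf(uptimes))             # 1000.0 hours
print(mttr(sum(downtimes), 3))   # 3.0 hours
print(mtbf(uptimes, downtimes))  # 1003.0 hours
```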
68. In redundant array of independent disks (RAID) technology, which of the following RAID levels does not require a hot spare drive or disk?
a. RAID3
b. RAID4
c. RAID5
d. RAID6
68. d. A hot spare drive is a physical drive resident on the disk array which is active and connected but inactive until an active drive fails. Then the system automatically replaces the failed drive with the spare drive and rebuilds the disk array. A hot spare is a hot standby providing a failover mechanism.
The RAID levels from 3 to 5 have only one disk of redundancy; because of this, a second failure would cause complete failure of the disk array. On the other hand, the RAID6 level has two disks of redundancy, providing greater protection against simultaneous failures. Hence, the RAID6 level does not need a hot spare drive, whereas the RAID3 to RAID5 levels do need a hot spare drive.
The RAID6 level without a spare uses the same number of drives (i.e., 4 + 0 spare) as the RAID3 to RAID5 levels with a hot spare (i.e., 3 + 1 spare), thus protecting data against simultaneous failures. Note that a hot spare can be shared by multiple RAID sets. On the other hand, a cold spare drive or disk is not resident on the disk array and is not connected to the system. A cold spare requires a hot swap, which is a physical (manual) replacement of the failed disk with a new disk, done by the computer operator.
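The drive-count comparison above can be sketched numerically. This is a simplified model (parameters assumed): RAID3/4/5 dedicate one drive's worth of capacity to parity, RAID6 dedicates two, and a hot spare holds no data until a rebuild:

```python
# Simplified sketch of redundancy in RAID levels 3-6 (illustrative only;
# real arrays distribute parity rather than dedicating whole drives).

def raid_summary(level, total_drives, drive_size_tb, hot_spares=0):
    """Return (usable capacity in TB, simultaneous drive failures the
    parity scheme survives). RAID 3/4/5 use one drive's worth of
    parity; RAID 6 uses two."""
    parity_drives = 2 if level == 6 else 1
    data_drives = total_drives - parity_drives - hot_spares
    return data_drives * drive_size_tb, parity_drives

# RAID5 with a hot spare vs. RAID6 without one, both on 4 x 2 TB drives:
print(raid_summary(5, 4, 2, hot_spares=1))  # (4, 1): 3 + 1 spare, survives 1 failure
print(raid_summary(6, 4, 2))                # (4, 2): 4 + 0 spare, survives 2 failures
```

With the same four drives, both layouts yield the same usable capacity, but RAID6 tolerates two simultaneous failures, which is why it does not need the hot spare.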
69. An example of ill-defined software metrics is which of the following?
a. Number of defects per thousand lines of code
b. Number of defects over the life of a software product
c. Number of customer problems reported to the size of the product
d. Number of customer problems reported per user month
69. c. Software defects relate to source code instructions, and problems encountered by users relate to usage of the product. If the numerator and denominator are mixed up, poor metrics result. An example of an ill-defined metric is the metric relating total customer problems to the size of the product, where size is measured in millions of shipped source instructions. This metric has no meaningful relation. On the other hand, the other three choices are examples of meaningful metrics. To improve customer satisfaction, you need to reduce defects and overall problems.
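The contrast between a meaningful metric and the ill-defined one can be sketched as follows. The helper names and figures are illustrative assumptions:

```python
# Sketch contrasting a meaningful usage-based metric with the
# ill-defined metric described in the answer.

def problems_per_user_month(problems, users, months):
    """Meaningful: customer problems normalized by actual usage
    (problems per user month, PUM)."""
    return problems / (users * months)

def problems_per_million_ssi(problems, shipped_source_instructions):
    """Ill-defined: a usage-based numerator (customer problems) over a
    code-size denominator (millions of shipped source instructions);
    the ratio has no meaningful relation."""
    return problems / (shipped_source_instructions / 1_000_000)

# Example (assumed): 120 problems reported by 500 users over 12 months.
print(problems_per_user_month(120, 500, 12))  # 0.02 problems per user month
```

The point is not the arithmetic but the units: a well-defined metric keeps numerator and denominator in the same domain (usage with usage, code with code).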
70. Which of the following information system component inventory is difficult to monitor?
a. Hardware specifications
b. Software license information
c. Virtual machines
d. Network devices
70. c. Virtual machines can be difficult to monitor because they are not visible to the network when not in use. The other three choices are easy to monitor.
71. Regarding incident handling, which of the following deceptive measures is used during incidents to represent a honeypot?
a. False data flows
b. False status measures
c. False state indicators
d. False production systems
71. d. A honeypot is a fake (false) production system that acts as a decoy to study how attackers do their work. The other three choices are also acceptable deceptive measures, but they do not use honeypots. False data flows include made-up (fake) data, not real data. System-status measures include active or inactive parameters. System-state indicators include startup, restart, shutdown, and abort.
72. For large software development projects, which of the following models provides greater satisfactory results on software reliability?
a. Fault count model
b. Mean-time-between-failures model
c. Simple ratio model
d. Simple regression model
72. a. A fault (defect) is an incorrect step, process, or data definition in a computer program, and it is an indication of reliability. Fault count models give more satisfactory results than the mean-time-between-failures (MTBF) model because the latter is used for hardware reliability. Simple ratio and simple regression models handle few variables and are used for small projects.
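One well-known fault count model is the Goel-Okumoto model, in which the expected number of cumulative faults found by time t is a(1 - e^(-bt)), where a is the total expected fault count and b is the per-fault detection rate. The sketch below uses assumed parameter values for illustration; it is not the only fault count model:

```python
import math

# Sketch of the Goel-Okumoto fault count model. Parameter values
# (a = 100 total faults, b = 0.05 per week) are assumptions chosen
# for illustration.

def expected_faults(t, a, b):
    """Expected cumulative faults detected by time t: a * (1 - e^(-b*t))."""
    return a * (1 - math.exp(-b * t))

def expected_remaining_faults(t, a, b):
    """Expected faults still latent in the software at time t."""
    return a - expected_faults(t, a, b)

a, b = 100.0, 0.05
print(round(expected_faults(20, a, b), 1))            # 63.2 faults found by week 20
print(round(expected_remaining_faults(20, a, b), 1))  # 36.8 faults remaining
```

A fitted curve of this kind lets a large project estimate remaining faults directly, which is why fault count models give more satisfactory reliability results than a hardware-oriented MTBF model.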
73. The objective “To provide management with appropriate visibility into the process being used by the software development project and of the products being built” is addressed by which of the following?
a. Software quality assurance management
b. Software configuration management
c. Software requirements management
d. Software project management
73. a. The goals of software quality assurance management include ensuring that (i) software quality assurance activities are planned, (ii) adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively, and (iii) noncompliance issues that cannot be resolved are addressed by higher levels of management.
The objectives of software configuration management are to establish and maintain the integrity of the products of the software project throughout the project’s software life cycle. The objectives of software requirements management are to establish a common understanding between the customer and the software project of the customer’s requirements that will be addressed by the software project. The objectives of software project management are to establish reasonable plans for performing the software engineering activities and for managing the software development project.