A safety-critical system[2] or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:[3][4]
- death or serious injury to people
- loss or severe damage to equipment/property
- environmental harm
A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved.[5] Safety-related systems do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage; the malfunction of such a system becomes hazardous only in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom.[6]
Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10⁹) hours of operation.[7][8] Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
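As a minimal illustration of how a fault tree combines component failure probabilities (the components, probabilities, and gate structure below are hypothetical, invented only for the example), consider a hazardous top event that occurs when a pump fails or both of two redundant relief valves stick closed:

```python
# Minimal fault-tree calculation (hypothetical components and probabilities).
# An AND gate requires all inputs to fail; an OR gate fires when any input fails.

def and_gate(*probs: float) -> float:
    """All inputs must fail (assumed independent)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """Any input failing causes the output event (assumed independent)."""
    result = 1.0
    for p in probs:
        result *= 1.0 - p
    return 1.0 - result

# Hypothetical per-hour failure probabilities identified by an FMEA:
P_PUMP = 1e-5    # pump stops
P_VALVE = 1e-4   # one relief valve stuck closed

# Top event: pump fails OR both redundant relief valves stick closed.
p_top = or_gate(P_PUMP, and_gate(P_VALVE, P_VALVE))
print(f"Top-event probability per hour: {p_top:.3e}")  # ~1.001e-05
```

Measured against the one-life-per-10⁹-hours budget mentioned above, such a calculation shows that the single pump, not the redundant valve pair, dominates the risk and is the component to strengthen or duplicate.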
Safety-critical systems are often analyzed together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate into a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, particularly as applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.[9]
Reliability regimes
Several reliability regimes for safety-critical systems exist:
- Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe: launch-on-loss-of-communications was rejected as a control scheme for U.S. nuclear forces because it is fail-operational, meaning a loss of communications would cause a launch, which was considered too risky. This is contrasted with the fail-deadly behavior of the Perimeter system built during the Soviet era.[10]
- Fail-soft systems are able to continue operating on an interim basis with reduced efficiency in case of failure.[11] Most spare tires are an example of this: they usually come with certain restrictions (e.g., a speed limit) and lead to lower fuel economy. Another example is the "Safe Mode" found in most Windows operating systems.
- Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten loss of life, because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e., turn combustion off when it detects a fault). Famously, nuclear weapon systems that launch only on command are fail-safe: if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
- Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones will lock, keeping an area secure.
- Fail-passive systems continue to operate in the event of a system failure. An example is an aircraft autopilot: in the event of a failure, the aircraft remains in a controllable state, allowing the pilot to take over, complete the journey, and perform a safe landing.
- Fault-tolerant systems avoid service failure when faults are introduced to the system. Control systems for ordinary nuclear reactors are an example. The normal method to tolerate faults is to have several computers continually test the parts of a system and switch on hot spares for failing subsystems (see the sketch after this list). As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. The computers, power supplies, and control terminals used by human beings must all be duplicated in these systems in some fashion.
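As a minimal sketch of the hot-spare pattern just described (the channel names, readings, self-tests, and safe state are hypothetical, invented for illustration), a supervisor can poll redundant channels, switch to a spare when the primary fails its self-test, and fall back to a fail-safe state when no healthy channel remains:

```python
# Hot-spare switchover with a fail-safe fallback (hypothetical example).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Channel:
    name: str
    read: Callable[[], float]      # returns a measurement
    self_test: Callable[[], bool]  # True if the channel is healthy

def select_channel(channels: list[Channel]) -> Optional[Channel]:
    """Return the first healthy channel (primary first, then hot spares)."""
    for ch in channels:
        if ch.self_test():
            return ch
    return None  # no healthy channel left

def control_step(channels: list[Channel]) -> str:
    ch = select_channel(channels)
    if ch is None:
        # Fail-safe regime: with no trustworthy input, command the safe
        # state (as a burner controller turns combustion off).
        return "SHUTDOWN"
    return f"RUN using {ch.name}: reading={ch.read():.2f}"

# The primary fails its self-test, so the supervisor uses the spare.
primary = Channel("primary", read=lambda: 42.0, self_test=lambda: False)
spare = Channel("spare", read=lambda: 41.8, self_test=lambda: True)
print(control_step([primary, spare]))  # RUN using spare: reading=41.80
print(control_step([primary]))         # SHUTDOWN
```

In a real system the supervisor itself, its power supply, and the operator's terminal would be duplicated as well, since any single point of failure defeats the redundancy.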
Software engineering for safety-critical systems
Software engineering for safety-critical systems is particularly difficult. Three aspects can be applied to aid the engineering of software for life-critical systems. First is process engineering and management. Second is selecting the appropriate tools and environment for the system, which allows the system developer to effectively test the system by emulation and observe its effectiveness. Third is addressing any legal and regulatory requirements, such as Federal Aviation Administration requirements for aviation. Setting a standard under which a system is required to be developed forces designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry in general (IEC 61508) and specifically for the automotive (ISO 26262), medical (IEC 62304), and nuclear (IEC 61513) industries. The standard approach is to carefully code, inspect, document, test, verify, and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements.[12] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.
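As a toy illustration of requirement-driven coding (not a formal proof; the requirement identifier, limits, and function below are hypothetical, invented for the example), a safety requirement can be written down as an executable check that every code path must satisfy, and then exercised at its boundary cases:

```python
# Hypothetical requirement REQ-042: the commanded infusion rate shall never
# exceed the configured hard limit, and a failed sensor shall force rate 0.
HARD_LIMIT_ML_PER_H = 500.0

def command_rate(requested: float, sensor_ok: bool) -> float:
    """Clamp the requested infusion rate; fail safe on a sensor fault."""
    if not sensor_ok:
        return 0.0  # fail-safe: stop pumping
    rate = min(max(requested, 0.0), HARD_LIMIT_ML_PER_H)
    # Runtime assertion tracing directly to REQ-042; in a certified process
    # this property would also be verified by review, test, and analysis.
    assert 0.0 <= rate <= HARD_LIMIT_ML_PER_H
    return rate

# Unit tests exercise the boundary cases the requirement names.
assert command_rate(9999.0, sensor_ok=True) == HARD_LIMIT_ML_PER_H
assert command_rate(-5.0, sensor_ok=True) == 0.0
assert command_rate(120.0, sensor_ok=False) == 0.0
```

Formal-methods approaches go further, mechanically proving such properties for all possible inputs rather than for sampled test cases.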
Examples of safety-critical systems
Infrastructure
Medicine[13]
The technology requirements can go beyond avoidance of failure: devices may also facilitate medical intensive care (which deals with healing patients) and life support (which stabilizes patients).
- Heart-lung machines
- Mechanical ventilation systems
- Infusion pumps and insulin pumps
- Radiation therapy machines
- Robotic surgery machines
- Defibrillator machines
- Pacemaker devices
- Dialysis machines
- Devices that electronically monitor vital functions (electrography, especially electrocardiography (ECG or EKG) and electroencephalography (EEG))
- Medical imaging devices (X-ray, computerized tomography (CT or CAT), magnetic resonance imaging (MRI) techniques, positron emission tomography (PET))
- Even healthcare information systems have significant safety implications[14]
Nuclear engineering[15]
- Nuclear reactor control systems
Oil and gas production[16]
- Process containment
- Well integrity
- Hull integrity (for floating production storage and offloading)
- Jacket and topside structures
- Lifting equipment
- Helidecks
- Mooring systems
- Fire and gas detection
- Critical instrumented functions (process shutdown, emergency shutdown)
- Actuated isolation valves
- Pressure relief devices
- Blowdown valves and flare system
- Drilling well control (blowout preventer, mud and cement)
- Heating, ventilation, and air conditioning (HVAC)
- Drainage systems
- Ballast systems
- Hull cargo tanks inerting system
- Heading control
- Ignition prevention (Ex certified electrical equipment, insulated hot surfaces, etc.)
- Firewater pumps
- Firewater and foam distribution piping
- Firewater and foam monitors
- Deluge valves
- Gaseous fire suppression systems
- Firewater hydrants
- Passive fire protection
- Temporary Refuge
- Escape routes
- Lifeboats and liferafts
- Personal survival equipment (e.g., lifejackets)
Recreation
- Amusement rides
- Climbing equipment
- Parachutes
- Scuba equipment
- Diving rebreather
- Dive computer (depending on use)
Transport
Railway[17]
- Railway signalling and control systems
- Platform detection to control train doors[18]
- Automatic train stop[18]
Automotive[19]
- Airbag systems
- Braking systems
- Seat belts
- Power steering systems
- Advanced driver-assistance systems
- Electronic throttle control
- Battery management system for hybrids and electric vehicles
- Electric park brake
- Shift-by-wire systems
- Drive-by-wire systems
- Park-by-wire
Aviation[20]
- Air traffic control systems
- Avionics, particularly fly-by-wire systems
- Radio navigation (Receiver Autonomous Integrity Monitoring; see the sketch after this list)
- Engine control systems
- Aircrew life support systems
- Flight planning to determine fuel requirements for a flight
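Receiver Autonomous Integrity Monitoring (RAIM) relies on redundant satellite measurements to detect a faulty one. The following sketch is a gross simplification (real RAIM performs a least-squares residual test over a multi-satellite position/clock solution; the readings and threshold here are hypothetical), but it shows the underlying idea of flagging the measurement that disagrees with its peers:

```python
# Simplified integrity-monitoring sketch (hypothetical numbers): redundant
# measurements of one quantity stand in for the satellite redundancy RAIM
# exploits; a leave-one-out residual plays the role of the test statistic.
import statistics

def consistency_check(measurements: list[float], threshold: float) -> list[int]:
    """Return indices of measurements whose leave-one-out residual
    exceeds the threshold (flagged as potentially faulty)."""
    flagged = []
    for i, m in enumerate(measurements):
        others = measurements[:i] + measurements[i + 1:]
        residual = abs(m - statistics.median(others))
        if residual > threshold:
            flagged.append(i)
    return flagged

# Four consistent pseudo-measurements and one gross fault:
readings = [100.2, 99.9, 100.1, 100.0, 112.7]
print(consistency_check(readings, threshold=1.0))  # [4]
```

In real RAIM, at least five satellites are needed to detect a fault and at least six to exclude the faulty satellite from the solution.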
Spaceflight[21]
- Human spaceflight vehicles
- Rocket range launch safety systems
- Launch vehicle safety
- Crew rescue systems
- Crew transfer systems
See also
- Biomedical engineering – Application of engineering principles and design concepts to medicine and biology
- Factor of safety – System strength beyond intended load
- Formal methods – Mathematical program specifications
- High integrity software
- Mission critical – Factor critical to the operation of an organization
- Nuclear reactor – Device used to initiate and control a nuclear chain reaction
- Redundancy (engineering) – Duplication of critical components to increase reliability of a system
- Real-time computing
- Reliability engineering – Sub-discipline of systems engineering that emphasizes dependability
- Safety-Critical Systems Club
- SAPHIRE – Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (risk analysis software)
- Therac-25 – Radiotherapy machine involved in six accidents
- Zonal Safety Analysis
References
- ↑ Knight, J. C. (2002). "Safety critical systems: challenges and directions". IEEE. pp. 547–550.
- ↑ "Safety-critical system". encyclopedia.com. Retrieved 15 April 2017.
- ↑ Sommerville, Ian (2015). Software Engineering (PDF). Pearson India. ISBN 978-9332582699. Archived from the original (PDF) on 2018-04-17. Retrieved 2018-04-18.
- ↑ Sommerville, Ian (2014-07-24). "Critical systems". Ian Sommerville's book website. Archived from the original on 2019-09-16. Retrieved 18 April 2018.
- ↑ "FAQ – Edition 2.0: E) Key concepts". IEC 61508 – Functional Safety. International Electrotechnical Commission. Archived from the original on 25 October 2020. Retrieved 23 October 2016.
- ↑ "Part 1: Key guidance" (PDF). Managing competence for safety-related systems. UK: Health and Safety Executive. 2007. Retrieved 23 October 2016.
- ↑ FAA AC 25.1309-1A – System Design and Analysis
- ↑ Bowen, Jonathan P. (April 2000). "The Ethics of Safety-Critical Systems". Communications of the ACM. 43 (4): 91–97. doi:10.1145/332051.332078. S2CID 15979368.
- ↑ CCPS in association with Energy Institute (2018). Bow Ties in Risk Management: A Concept Book for Process Safety. New York, N.Y. and Hoboken, N.J.: AIChE and John Wiley & Sons. ISBN 9781119490395.
- ↑ Thompson, Nicholas (2009-09-21). "Inside the Apocalyptic Soviet Doomsday Machine". WIRED.
- ↑ "Definition fail-soft".
- ↑ Bowen, Jonathan P.; Stavridou, Victoria (July 1993). "Safety-critical systems, formal methods and standards". Software Engineering Journal. IEE/BCS. 8 (4): 189–209. doi:10.1049/sej.1993.0025. S2CID 9756364.
- ↑ "Medical Device Safety System Design: A Systematic Approach". mddionline.com. 2012-01-24.
- ↑ Anderson, RJ; Smith, MF, eds. (September–December 1998). "Special Issue: Confidentiality, Privacy and Safety of Healthcare Systems". Health Informatics Journal. 4 (3–4).
- ↑ "Safety of Nuclear Reactors". world-nuclear.org. Archived from the original on 2016-01-18. Retrieved 2013-12-18.
- ↑ Step Change in Safety (2018). Assurance and Verification Practitioners' Guidance Document. Aberdeen: Step Change in Safety.
- ↑ "Safety-Critical Systems in Rail Transportation" (PDF). Rtos.com. Archived from the original (PDF) on 2013-12-19. Retrieved 2016-10-23.
- 1 2 Archived source at the Wayback Machine.
- ↑ "Safety-Critical Automotive Systems". sae.org.
- ↑ Leanna Rierson (2013-01-07). Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance. CRC Press. ISBN 978-1-4398-1368-3.
- ↑ "Human-Rating Requirements and Guidelinesfor Space Flight Systems" (PDF). NASA Procedures and Guidelines. June 19, 2003. NPG: 8705.2. Archived from the original (PDF) on 2021-03-17. Retrieved 2016-10-23.