The webinar “System Safety Verification and Validation for AI Systems” was organized by the Centre for Advances in Reliability and Safety (CAiRS) on 4th Aug 2021.
System safety is a complex process because it involves integrating different systems, together with their behaviors and misbehaviors, into a system of systems. Verification and validation of AI systems therefore become a major challenge. Dr. Dev Raheja (Adjunct Professor of Reliability Engineering at the University of Maryland, Mechanical Engineering Department) was the guest speaker.
Firstly, he gave an overview: system safety is a specialty within systems engineering that supports program risk management. Its goal is to optimize safety by identifying safety-related risks and eliminating or controlling them by design. He then explained the traditional verification and validation model.
Then Dr. Dev Raheja briefed us on the definition of risk, risk management and big risks, as well as high-level safety. Big risks relate to time-to-market, but it is dangerous if an unknown hazard remains in the product. (That reminded me of the Samsung Note 7 case!)
After that, Dr. Raheja said system reliability is the most important component of system safety for an AI system, and that almost all accidents result from poor reliability. He introduced four types of AI: Reactive Machines, Limited Memory, Theory of Mind and Self-Aware.
Reactive machines perform basic operations; this is the first stage of any AI system. A machine learning model that takes a human face as input and outputs a box around the face to identify it as a face is a simple reactive machine. (No stored input, no learning!)
Limited-memory types refer to an AI’s ability to store previous data and/or predictions, using that data to make better predictions. Every machine learning model requires limited memory to be created, but the model can be deployed as a reactive machine type.
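To make the distinction concrete, here is a minimal sketch of my own (not from the webinar; the class names and the moving-average rule are purely illustrative): the same yes/no predictor deployed statelessly as a reactive machine, and with stored history as a limited-memory machine.

```python
class ReactiveMachine:
    """Stateless: each output depends only on the current input."""

    def predict(self, x):
        # Fixed rule, nothing stored between calls.
        return x > 0.5


class LimitedMemoryMachine:
    """Stores recent inputs and uses them to refine its prediction."""

    def __init__(self, window=3):
        self.window = window
        self.history = []  # previous data kept to improve predictions

    def predict(self, x):
        # Keep only the last `window` inputs, then decide on their average,
        # so the answer depends on stored context, not just the current x.
        self.history = (self.history + [x])[-self.window:]
        avg = sum(self.history) / len(self.history)
        return avg > 0.5
```

The reactive machine always gives the same answer for the same input, while the limited-memory machine can answer differently once its stored history shifts the average — which is exactly the “uses previous data” distinction above.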
Theory of Mind AI is only in its beginning phases, for example in self-driving cars. In this type, the AI begins to interact with the thoughts and emotions of humans.
Self-aware AI, which would have an independent intelligence beyond the human, exists only in fiction.
Finally, Dr. Raheja said we need robust specifications, yet at least 60% of requirements are missed in most specifications. He also said we need several hazard analysis tools and accelerated life tests (e.g. HALT, S-N diagram).
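The S-N diagram mentioned above plots stress amplitude against cycles to failure; it is commonly modeled by Basquin's equation, sigma_a = sigma_f' * (2N)^b. As a hedged sketch of how accelerated life testing uses that curve (the coefficient values below are made-up illustrative numbers, not material data from the talk):

```python
def cycles_to_failure(stress_amplitude, sigma_f=900.0, b=-0.1):
    """Invert Basquin's equation sigma_a = sigma_f * (2N)**b to get
    N = ((sigma_a / sigma_f) ** (1/b)) / 2, the cycles to failure.

    sigma_f (fatigue strength coefficient) and b (fatigue strength
    exponent) are hypothetical values for illustration only.
    """
    return ((stress_amplitude / sigma_f) ** (1.0 / b)) / 2.0
```

Because b is negative, higher stress gives fewer cycles to failure — which is why an accelerated life test runs the product at elevated stress to fail it quickly, then extrapolates back down the S-N curve to the service stress.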
Lastly, he pointed out that the secret of success is to make top management responsible for safety and reliability, conducting frequent training and workshops with them, as well as involving them in audits. He also suggested engaging independent consultants and auditors to evaluate the effectiveness of the system of systems.
Reference:
CAiRS - https://www.cairs.hk/view/index.php
20210115: CAiRS webinar - System Reliability and Maintenance - Key Success Factors for your business - https://qualityalchemist.blogspot.com/2021/01/cairs-webinar-system-reliability-and.html
20201126: CAiRS webinar - How Products Reliability and Systems Safety help Local Industry - https://qualityalchemist.blogspot.com/2020/11/cairs-webinar-how-products-reliability.html
20201029: Breakfast with Prof. Winco Yung and meet with Mr. Ben Tsang in Science Park - https://qualityalchemist.blogspot.com/2020/10/breakfast-with-prof-winco-yung-and-meet.html