14940

Formal Software Verification Enabling Safety, Explainability, and Adversarial Robustness

Guy Katz, HUJI, School of Computer Science and Engineering

Category

Explainable Artificial Intelligence, Neural Networks, Deep Learning

Markets

Autonomous Systems, Safety-Critical Systems, FinTech, Healthcare, Insurance

 

Problem

Software systems powered by Artificial Intelligence (AI) are increasingly penetrating complex real-world environments. In these environments, human-driven software development is no longer tenable, and sensitive decisions are driven and automated by the outputs of a machine. Machine learning methods are used to generate the code base, but how and why the machine makes its decisions remains an opaque black box.

For example, how would an autonomous car determine whether a yellow line is a lane divider or merely a yellow fence? In the example below, a slight perturbation of the input pixels causes the system to misclassify a traffic light as a kitchen oven.

[Figure 14940_katz.jpg] Adversarial example: slight input perturbations can cause misclassification
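
To make the attack concrete, the snippet below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard technique for crafting such perturbations. It is purely illustrative (not necessarily the method behind the figure above) and assumes a hypothetical pretrained PyTorch classifier called model.

    # Minimal FGSM sketch (illustrative; `model` is a hypothetical
    # pretrained PyTorch classifier, `x` a batch of correctly
    # classified inputs, `label` their true class indices).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, epsilon=0.01):
        """Return a copy of x nudged by epsilon in the direction
        that most increases the classification loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # The gradient's sign gives the worst-case direction per pixel.
        return (x + epsilon * x.grad.sign()).detach()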

These decisions have real life-and-death consequences, yet most of these neural networks offer no means of guaranteeing that the decisions made are correct every time. Furthermore, what about deliberate, nefarious attempts to game these neural networks? What prevents an adversary from tricking the system?

Our Solution

Software failure in autonomous and safety-critical systems can be catastrophic. The research group of Dr. Guy Katz has developed and optimized methods that definitively characterize the functional behavior of complex software. Such formal guarantees ensure that edge cases (inputs that do not conform to anything covered by the code base) will not cause the software to fail. Alternatively, his methods can identify such edge cases and refer them to a human, rather than letting the system decide autonomously.

The novel methods of Dr. Katz enable the following advantages:

  • System Safety – verify that the network always upholds its required properties
  • Adversarial Robustness – verify that the network is immune to small input perturbations (see the sketch after this list)
  • Network Simplification – reduce network size without degrading performance, enabling a precise trade-off between size and accuracy
  • Explainability – verify or refute hypotheses about the network's decisions (e.g., that a loan was denied because of credit history)
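
The sketch below illustrates the core idea behind this kind of verification: encode the network and the negation of a desired property as constraints, and ask a solver for a counterexample. It uses the general-purpose Z3 solver rather than the group's own tools, and the tiny two-neuron ReLU network, its weights, and the property are all hypothetical, chosen only to keep the example self-contained.

    # SMT-based verification sketch (Python + Z3). Network, weights,
    # and property are hypothetical; dedicated verifiers scale to far
    # larger networks.
    from z3 import Real, If, Solver, And, sat

    def relu(v):
        return If(v > 0, v, 0)

    x1, x2 = Real('x1'), Real('x2')
    # Hidden layer: two ReLU neurons with hand-picked weights.
    h1 = relu(1.0 * x1 - 1.0 * x2)
    h2 = relu(0.5 * x1 + 0.5 * x2)
    y = h1 + h2  # linear output neuron

    # Property: for every input in a small box around (1, 0), the
    # output stays at or above 0.4. We assert the NEGATION; if no
    # counterexample exists, the property is proven for the region.
    eps = 0.1
    s = Solver()
    s.add(And(x1 >= 1 - eps, x1 <= 1 + eps,
              x2 >= -eps, x2 <= eps))
    s.add(y < 0.4)  # negated property

    if s.check() == sat:
        print("Counterexample:", s.model())
    else:
        print("Property holds for the entire input region.")

The same machinery supports explainability queries: to test the hypothesis that a loan was denied because of poor credit history, one asks the solver whether any applicant with a good credit history is still denied.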

 

Opportunity

Target domains: safety-critical systems, reinforcement learning systems, natural language processing, finance, insurance, medicine, computer vision, autonomous vehicles, and many others.

Contact for more information:

Anna Pellivert
VP, Business Development
+972-2-6586697