AI is changing the world, from health to transport.

JEFF WISE has written about how, after the Second World War, some systems became so complex that an entirely new safety protocol emerged. It abandoned the old approach, which was to guarantee the safety of each component in a system, in favour of evaluating and ensuring the safety of the interactions between components.

This new protocol, called system-theoretic process analysis [STPA], was the brainchild of an aeronautics professor named Nancy Leveson. The question is whether AI processes are susceptible to the same evolution, or are simply far too complex. Opinion is split.

Some experts think that we are merely at a very early stage of AI and that, just as in the early days of computing, very few people understand the whole system. Instead, individuals understand only tiny parts of the process.

To establish trust among human users, it seems certain that some AI processes in critical areas like medicine, justice, autonomous vehicles and planes [to name a few] will need to be able to explain why they decided as they did.

DeepMind has plenty of competition in the AI eye-scanning market. But what may set it apart is the ability to explain the diagnoses it reaches, evaluating each of the tens of millions of pixels in each scan.
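The article does not say how such pixel-level explanations are produced, but one common, general technique is occlusion sensitivity: mask small patches of the image one at a time and measure how much the model's diagnostic score drops. Everything below is a hedged illustration, not DeepMind's method; the `predict` callable and the toy "model" are hypothetical stand-ins.

```python
import numpy as np

def occlusion_saliency(image, predict, patch=8, baseline=0.0):
    """Occlusion sensitivity: blank out each patch of `image` and record
    how much the model's score drops. A larger drop means the region
    mattered more to the decision. `predict` is any callable mapping an
    image array to a scalar score (a hypothetical stand-in here)."""
    h, w = image.shape[:2]
    base_score = predict(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Score drop caused by hiding this patch.
            saliency[y:y + patch, x:x + patch] = base_score - predict(occluded)
    return saliency

# Toy example: a "model" whose score is the mean brightness of the
# top-left quadrant, so only that region should show up as important.
img = np.ones((16, 16))
score = lambda im: im[:8, :8].mean()
sal = occlusion_saliency(img, score, patch=8)
```

In this toy run, only the top-left patch produces a score drop, so the saliency map highlights exactly the region the "model" actually used, which is the kind of evidence a clinician could inspect.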

Explainability, in this case, moves from an arcane theoretical concept to a critical competitive advantage in a market already worth billions of dollars each year.

The World Students Society thanks author and researcher Harry de Quetteville.

