Pulled from the pages of science fiction, self-driving cars seem to be on their way to becoming real. Cruise and Waymo are expanding the reach of their autonomous taxi experiments, and Tesla (TSLA), of course, has been ramping up its Full Self-Driving rollout amid repeated promises that true FSD is right around the corner.
But the technology remains heavily flawed. TheStreet reported last week that the artificial intelligence models powering self-driving cars have a number of vulnerabilities, not least the industry's lack of standardized testing platforms to independently verify that the models are safe.
Safe, human-level self-driving, however, isn't just around the bend, according to Navy veteran and engineer Michael DeKort. The costs in human lives, time and money, he says, are too high for true, safe self-driving to ever be achieved.
The issue for DeKort — the engineer who exposed Lockheed Martin’s subpar safety practices in 2006 — is that artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist. So the AI that makes self-driving cars work learns through extensive pattern recognition.
Human drivers, he said, scan their environment all the time. When they see something, whether it's a group of people about to cross an intersection or a deer at the side of the road, they react without needing to register the details of the potential threat (its color, for example).
The system has to experience something to learn it
“The problem with these systems is they work from the pixels out. They have to hyperclassify,” DeKort told TheStreet. Pattern recognition, he added, is just not feasible, “because one, you have to stumble on all the variations. Two, you have to re-stumble on them hundreds if not thousands of times because the process is extremely inefficient. It doesn’t learn right away.”
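To make the distinction concrete, here is a deliberately simplified sketch in Python, not taken from DeKort or from any actual driving stack; the features, numbers and threshold are invented for illustration. It shows a pattern matcher that only flags a hazard when the scene closely resembles an example it has already been shown. Trained only on deer seen in daylight, it misses the same deer at night, while a human driver would react to "something in the road" either way.

```python
# Illustrative sketch only: a toy pattern matcher that flags a scene as a
# hazard only if it closely resembles an example it has already seen.
# The feature encoding, values and threshold are all invented for illustration.
import numpy as np

# Variations the system has "stumbled on" so far: deer seen in daylight.
# Features: [apparent_size, scene_brightness, distance_from_road_edge]
seen_hazards = np.array([
    [0.8, 0.90, 0.20],   # large deer, bright daylight, near the road edge
    [0.7, 0.80, 0.30],
    [0.9, 0.85, 0.25],
])

def looks_like_known_hazard(scene, memory, threshold=0.3):
    """Flag the scene as a hazard only if it is close to a remembered example."""
    distances = np.linalg.norm(memory - scene, axis=1)
    return distances.min() < threshold

# A variation the system has never encountered: the same deer, but at night.
deer_at_night = np.array([0.8, 0.10, 0.20])

print(looks_like_known_hazard(deer_at_night, seen_hazards))  # False: the unseen variation is missed
```

Real driving systems use deep neural networks rather than a lookup like this, but the limitation DeKort describes is analogous: the system only handles variations it has been exposed to, and it needs many repeated exposures before a new variation sticks.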