Chinese telecom company ZTE has proposed third-party validation to allay U.S. national security concerns, says a ZTE executive.
“We will partner with an independent third party, a kind of laboratory, to validate ZTE’s products including hardware and software,” Dai Shu, vice director of ZTE’s Corporate Branding and Communication Department, told Xinhua on Tuesday.
“This independent third party or laboratory will be under the oversight of the US government,” he added. (Xinhua)
The House Intelligence Committee report, which is at the center of all this, did address this issue (pages 4–6 of the report), yet ultimately rejected it on various grounds. I found this to be perhaps the weakest section of the entire report in terms of persuasiveness, although since I’m not well versed in the applicable technology, I can’t say with any certainty whether the report’s conclusion is reasonable.
Here are a few of the reasons given in the report for why testing wouldn’t work.
I found this one curious:
For a variety of technical and economic reasons, evaluation programs as proposed by Huawei and ZTE are less useful than one might expect. In fact, the programs may create a false sense of security that an incomplete, flawed, or misapplied evaluation would provide. An otherwise careful consumer may choose to forego a thorough threat, application, and environment-based risk assessment, and the costs such evaluations entail, because an accredited outside expert has “blessed” the product in some way.
Um, what? If the program is “incomplete, flawed or misapplied,” then it shouldn’t be certified. We’re talking about national security here; how about mandating high(er) standards? Then, if the program is kosher, it can be relied upon with good cause. Am I missing something here?
I also didn’t quite understand this objection:
One key issue not addressed by standardized third-party security evaluations is product and deployment diversity. The behavior of a device or system can vary wildly depending on how and where it is configured, installed, and maintained. For time and cost reasons, an evaluation usually targets a snapshot of one product model configured in a specific and often unrealistically restrictive way.
I found this to be the most persuasive argument on this issue, but that’s only in comparison to the others, which seem rather shaky. This is obviously way out of my league in terms of tech expertise, but if someone is testing a product for malicious code, is a different configuration or use really going to render that testing useless? Moreover, technology always changes, but we have to draw the line somewhere. This seems like a bit of a stretch to me, but again, it’s way outside my comfort zone.
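For what it’s worth, the report’s configuration point can be made concrete. Here’s a toy sketch (entirely hypothetical, not drawn from any real product) of why a “snapshot” test of one configuration can miss deliberately hidden behavior: a routing function that acts normally under the default config a lab would test, but adds a covert mirror destination under one specific production configuration.

```python
# Hypothetical sketch: a 'device' forwarding function with a hidden trigger.
# A time-boxed lab evaluation of the default configuration would never
# exercise the malicious branch, so the product would test clean.

def forward(packet: dict, config: dict) -> list:
    """Return the list of destinations this 'device' sends the packet to."""
    destinations = [packet["dst"]]
    # Hidden trigger: fires only when SNMP is enabled AND the management
    # VLAN is exactly 113 -- an arbitrary combination chosen here to show
    # how narrow a trigger condition can be.
    if config.get("snmp_enabled") and config.get("mgmt_vlan") == 113:
        destinations.append("203.0.113.99")  # covert mirror target
    return destinations

lab_config = {"snmp_enabled": False, "mgmt_vlan": 1}      # what the lab tests
field_config = {"snmp_enabled": True, "mgmt_vlan": 113}   # what an operator deploys

print(forward({"dst": "198.51.100.7"}, lab_config))    # ['198.51.100.7']
print(forward({"dst": "198.51.100.7"}, field_config))  # ['198.51.100.7', '203.0.113.99']
```

So the report’s claim isn’t crazy on its face; the question is whether a well-funded evaluation could be required to test across configurations rather than a single snapshot.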
Here’s an objection I like to call the “Sinister Huawei Repairman Problem”:
The evaluation of products prior to deployment only addresses the product portion of the lifecycle of networks. It is also important to recognize that how a network operator oversees its patch management, its trouble-shooting and maintenance, upgrades, and managed-service elements, as well as the vendors it chooses for such services, will affect the ongoing security of the network.
So the product might be fine at the outset, but once a patch is applied, all kinds of crazy shit might happen. That’s a pretty tough limitation, one that would render any one-time product testing insufficient on its own. Now you’re basically talking about monitoring the entire life of the product. Going by the rest of the report, though, maybe that’s the type of restriction they want.
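On the other hand, lifecycle risk isn’t a total black box either. A minimal sketch of one standard control, assuming the evaluator publishes digests of approved patches out-of-band: refuse to apply any vendor patch whose SHA-256 digest doesn’t match the published value. (A real deployment would use cryptographic signatures and a chain of trust; this toy example shows only the digest-check step.)

```python
# Minimal sketch of patch integrity checking, assuming an out-of-band
# channel (e.g. the evaluator) supplies the expected digest. This is an
# illustration of the concept, not any vendor's actual update mechanism.

import hashlib

def patch_is_untampered(patch_bytes: bytes, expected_sha256: str) -> bool:
    """Apply the patch only if this returns True."""
    return hashlib.sha256(patch_bytes).hexdigest() == expected_sha256

original_patch = b"fix buffer overflow in snmp handler"
good_digest = hashlib.sha256(original_patch).hexdigest()

print(patch_is_untampered(original_patch, good_digest))                    # True
print(patch_is_untampered(b"fix overflow... plus backdoor", good_digest))  # False
```

That doesn’t answer the report’s deeper point about maintenance personnel and managed services, but it does suggest the patch problem is a known one with known mitigations.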
Then there are the conflict of interest and “government can’t do it” objections:
Vendors financing their own security evaluations create conflicts of interest that lead to skepticism about the independence and rigor of the result. A product manufacturer will naturally pursue its own interests and ends which are not necessarily aligned with all interests of the consumers. A different, but related, race to the bottom has been noted for the similarly vendor-financed Common Criteria evaluations. The designers of the Common Criteria system understood this danger and implemented government certification for evaluators. The precaution seems mostly cosmetic as no certification has ever been challenged or revoked despite gaming and poor evaluation performance.
I’m well aware of the possible conflicts. I remember when China used to be awash in questionable ISO 9000 consultants, evaluators — hell, an entire industry. But when that doesn’t work, government has to get involved, restricting who may issue certificates and either certifying the evaluators or doing the evaluations directly itself. The report even notes that the UK will be going the latter route but mysteriously adds that it isn’t clear whether the U.S. can replicate the UK system. Why not?
Finally, the report basically gives up from a technical standpoint:
The task of finding and eliminating every significant vulnerability from a complex product is monumental. If we also consider flaws intentionally inserted by a determined and clever insider, the task becomes virtually impossible. While there is a large body of literature describing techniques for finding latent vulnerabilities in hardware and software systems, no such technique claims the ability to find all such vulnerabilities in a pre-existing system.
You know, that’s not all that persuasive. I’d like to hear a few tech experts tell me that before I believe it. Yes, I’m sure there’s no way to guarantee that all vulnerabilities can be found, but what are we talking about here: 80%, 90%, 99.99% of potential risks? Nothing’s perfect, and if the UK thinks testing can work, I’d like to know more.