Chinese Companies and U.S. Telecom: What’s Wrong With Product Testing?

October 10, 2012

ZTE sees product testing as a good solution to U.S. national security concerns about Chinese telecom product and service providers:

Chinese telecom company ZTE has proposed third-party validation to relieve national security concerns, says a ZTE executive.

“We will partner with an independent third party, a kind of laboratory, to validate ZTE’s products including hardware and software,” Dai Shu, vice director of ZTE’s Corporate Branding and Communication Department, told Xinhua on Tuesday.

“This independent third party or laboratory will be under the oversight of the US government,” he added. (Xinhua)

The House Intelligence Committee report, which is at the center of all this, did address this issue (pages 4-6 of the report), yet ultimately rejected it on various grounds. I found this to be perhaps the weakest section of the entire report in terms of persuasiveness, although since I’m not well versed in the applicable technology, I can’t say with any certainty whether the report’s conclusion is reasonable.

Here are a few of the reasons given in the report for why testing wouldn’t work.

I found this one curious:

For a variety of technical and economic reasons, evaluation programs as proposed by Huawei and ZTE are less useful than one might expect. In fact, the programs may create a false sense of security that an incomplete, flawed, or misapplied evaluation would provide. An otherwise careful consumer may choose to forego a thorough threat, application, and environment-based risk assessment, and the costs such evaluations entail, because an accredited outside expert has “blessed” the product in some way.

Um, what? If the evaluation is “incomplete, flawed, or misapplied,” then the product shouldn’t be certified. We’re talking about national security here; how about mandating high(er) standards? Then, if the program is kosher, it can be relied upon for good cause. Am I missing something here?

I also didn’t quite understand this objection:

One key issue not addressed by standardized third-party security evaluations is product and deployment diversity. The behavior of a device or system can vary wildly depending on how and where it is configured, installed, and maintained. For time and cost reasons, an evaluation usually targets a snapshot of one product model configured in a specific and often unrealistically restrictive way.

I found this to be the most persuasive argument on this issue, though only compared to the others, which seem rather shaky. This is obviously way out of my league in terms of tech expertise, but if someone is testing a product for malicious code, will a different configuration or use really render the testing useless? Moreover, technology always changes, but we have to draw the line somewhere. This seems like a bit of a stretch to me, but again, it’s way outside my comfort zone.

Here’s an objection I like to call the “Sinister Huawei Repairman Problem”:

The evaluation of products prior to deployment only addresses the product portion of the lifecycle of networks. It is also important to recognize that how a network operator oversees its patch management, its trouble-shooting and maintenance, upgrades, and managed-service elements, as well as the vendors it chooses for such services, will affect the ongoing security of the network.

So the product might be fine at the outset, but once a patch is applied, all kinds of crazy shit might happen. That’s a pretty tough limitation; it would render any one-time product test useless, since now you’re basically talking about securing the entire life of the product. Going by the rest of the report, though, maybe that’s exactly the type of restriction they want.

Then there are the conflict of interest and “government can’t do it” objections:

Vendors financing their own security evaluations create conflicts of interest that lead to skepticism about the independence and rigor of the result. A product manufacturer will naturally pursue its own interests and ends which are not necessarily aligned with all interests of the consumers. A different, but related, race to the bottom has been noted for the similarly vendor-financed Common Criteria evaluations. The designers of the Common Criteria system understood this danger and implemented government certification for evaluators. The precaution seems mostly cosmetic as no certification has ever been challenged or revoked despite gaming and poor evaluation performance.

I’m well aware of the possible conflicts. I remember when China was awash in questionable ISO 9000 consultants and evaluators; hell, it was an entire industry. But when self-policing doesn’t work, government has to get involved, restricting who may issue certificates and either certifying the evaluators or doing the evaluations directly itself. The report even says that the UK will be going the latter route, but mysteriously adds that it isn’t clear whether the U.S. can replicate the UK system. Why not?

Finally, the report basically gives up from a technical standpoint:

The task of finding and eliminating every significant vulnerability from a complex product is monumental. If we also consider flaws intentionally inserted by a determined and clever insider, the task becomes virtually impossible. While there is a large body of literature describing techniques for finding latent vulnerabilities in hardware and software systems, no such technique claims the ability to find all such vulnerabilities in a pre-existing system.

You know, that’s not all that persuasive. I’d like to hear a few tech experts tell me that before I believe it. Yes, I’m sure there’s no way to guarantee that all vulnerabilities can be found, but what are we talking about here: 80%, 90%, 99.99% of potential risks? Nothing’s perfect, and if the UK thinks testing can work, I’d like to know more.


2 thoughts on “Chinese Companies and U.S. Telecom: What’s Wrong With Product Testing?”

  1. bystander

    I spent many years as a software engineer working on routers, switches, and the like: communications gear of all kinds. I worked as a product architect for a big US high-tech firm that has been in and out of that business over the past couple of decades. So, for what it’s worth, I’ll offer up a few possible explanations and interpretations.

    First, understand that modern communications gear is highly programmable. A switch or router is nothing but a specialized computer, typically with many specialized processors, that examines incoming communications data in one form or another (network packets, frames of data on an optical network, voice data that is encoded and transmitted between cell phones and cellular base stations, and so on).

    As with any programmable device, it doesn’t mean much to say that one will “test it” for security faults. Your PC does not exhibit any security violations when it ships from the factory; it is only when you download a virus that it begins to do so. Similarly, a router or switch can operate perfectly normally until either something triggers a latent backdoor (something hidden in the firmware of the device) or an update, patch, or reprogramming introduces such a thing. Communications gear, like all equipment involving lots of software, has to be maintained and periodically updated, typically in the form of firmware patches from the manufacturer.

    As a practical matter, it seems infeasible to me to inspect or test such code for freedom from backdoors and security breaches. You can observe the device as a black box, but if it does contain a backdoor or trojan horse or what-not, you won’t know it until it is triggered, in the same way that you don’t know about a virus on your PC until it is activated from the outside; unless, of course, it springs to life on its own.
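    To make the black-box point concrete, here is a purely hypothetical sketch, in C, of how such a latent trigger can hide in a packet-forwarding path. The function names and the trigger value are invented for illustration; this is not code from any real device. The point is that the device behaves exactly to spec on every input except one attacker-chosen byte sequence:

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical attacker-chosen trigger. Ordinary test traffic
           will essentially never contain this exact 8-byte sequence. */
        #define TRIGGER_LEN 8
        static const uint8_t MAGIC[TRIGGER_LEN] =
            { 0x7a, 0x74, 0x65, 0x2d, 0x6d, 0x61, 0x67, 0x31 };

        static bool mirror_enabled = false;

        /* Stub standing in for the device's normal forwarding path. */
        static void route_and_transmit(const uint8_t *pkt, size_t len)
        {
            (void)pkt;
            printf("forwarded %zu bytes%s\n", len,
                   mirror_enabled ? " (and silently copied out)" : "");
        }

        void handle_packet(const uint8_t *pkt, size_t len)
        {
            /* Hidden branch: one specific input arms a traffic mirror,
               and the trigger packet itself is dropped without a trace. */
            if (len >= TRIGGER_LEN && memcmp(pkt, MAGIC, TRIGGER_LEN) == 0) {
                mirror_enabled = true;
                return;
            }
            route_and_transmit(pkt, len);  /* every other input: to spec */
        }

        int main(void)
        {
            const uint8_t normal[] = "hello";
            handle_packet(normal, sizeof normal);  /* passes any test  */
            handle_packet(MAGIC, TRIGGER_LEN);     /* silently arms it */
            handle_packet(normal, sizeof normal);  /* now also copied  */
            return 0;
        }

    Unless a test harness happens to feed in that exact sequence, the observable behavior is indistinguishable from a clean device.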

    Complicating matters is the fact that this kind of software is very technical and complex. It is typically written in very low-level languages and delivered in machine code form (or bitstreams for programmable logic, that kind of thing). In other words, it arrives in a form that is impossible to inspect visually with any reliability. One could imagine getting the source code and inspecting that, but it’s a huge technical undertaking, and I can’t think of a precedent for that kind of testing/inspection regime. The usual rule in building secure/reliable/safe systems, like the code that manages a modern airplane, is to build trust into the development process of the software. You trust it because it comes from Boeing, and you have a direct window into their security procedures and processes; it is those that give you assurance that the code is safe. Treating the code as hostile and subjecting it to tests and inspections and what-not strikes me as technically infeasible, enormously expensive, and, in brief, a needless and significant risk.
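    One small, concrete piece of that trust-the-process model is provenance checking: verifying that the firmware image you are about to load is bit-for-bit the image the vendor built. Here is a minimal sketch using OpenSSL’s SHA256(); the image bytes and the digest handling are invented for illustration. Note what it does and doesn’t prove: it shows the image is the vendor’s, but says nothing about whether the vendor’s code is benign, which is exactly why the trust has to live upstream in the vendor’s process:

        /* Compile with: cc verify.c -lcrypto */
        #include <openssl/sha.h>
        #include <stdio.h>
        #include <string.h>

        /* Returns 1 if the image hashes to the digest the vendor
           published out of band, 0 otherwise. (A real deployment would
           verify a cryptographic signature, not just a bare hash.) */
        int verify_image(const unsigned char *image, size_t len,
                         const unsigned char expected[SHA256_DIGEST_LENGTH])
        {
            unsigned char digest[SHA256_DIGEST_LENGTH];
            SHA256(image, len, digest);
            return memcmp(digest, expected, SHA256_DIGEST_LENGTH) == 0;
        }

        int main(void)
        {
            const unsigned char image[] = "example firmware bytes";
            unsigned char vendor_digest[SHA256_DIGEST_LENGTH];

            /* Stand-in for the digest the vendor would publish. */
            SHA256(image, sizeof image, vendor_digest);

            printf("image %s\n",
                   verify_image(image, sizeof image, vendor_digest)
                       ? "matches vendor digest" : "REJECTED");
            return 0;
        }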

    What the report says about all this sounds like technical recommendations coming from people who understand communications very well, but the language has naturally been worked into a form that is appropriate for a public document like the findings of a hearing. That is, you don’t expect such a document to go into the technical depth necessary to explain exactly what the technical barriers are. That doesn’t mean they aren’t real. This may sound like a cliché or a dismissal, but I’ll say it anyway: the guys at the NSA are no dummies. Think about what they know about communications systems. They are the ones behind these recommendations, as I read this (reading between the lines, of course).

  2. bystander

    I’d like to add to my last post a mention of the recent case of the ZTE phone that was discovered to have a backdoor. You can read about the case here: http://www.reuters.com/article/2012/05/18/us-zte-phone-idUSBRE84H08J20120518. The article does not give the actual code one entered into the phone to activate the backdoor; it was a string of letters and numbers, something like ZTE017393049586 (I’m making that up, but it was of that form).

    Now, suppose you do what ZTE recommends and test that phone, not knowing the code. What would possibly drive you to enter that sequence into the keypad? I have no idea how the researchers who discovered it happened upon the sequence; maybe they took the phone apart and reverse-engineered the code. But the point is that one could do a very thorough test of that phone, placing and receiving calls and messages, etc., and never have any clue that anything was wrong with it. It took some serious sleuthing to figure out that there was such a backdoor.
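    For illustration only, the hidden check probably reduces to something of this shape; the secret below is the same made-up placeholder from the paragraph above, not ZTE’s actual code. With a secret of that length, the space of possible dial strings is astronomically large, so no amount of keypad-level fuzzing will stumble onto it:

        #include <stdio.h>
        #include <string.h>

        #define SECRET "ZTE017393049586"  /* hypothetical placeholder */

        void on_dial_string(const char *entered)
        {
            /* Hidden branch: one exact string escalates privileges. */
            if (strcmp(entered, SECRET) == 0) {
                printf("spawning privileged shell (backdoor path)\n");
                return;
            }
            printf("placing call to %s (normal path)\n", entered);
        }

        int main(void)
        {
            on_dial_string("5551234");  /* every ordinary test looks fine */
            on_dial_string(SECRET);     /* only the exact secret differs  */
            return 0;
        }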

    I think the fact that ZTE shipped a phone with such a backdoor speaks for itself. But a phone is innocuous compared to a switch or router or base station that is placed centrally in the communications infrastructure. There is no keypad to push random strings of digits into to do the testing, haha. And a switch/router/base station has far more memory and bandwidth and complexity in which to embed such a backdoor.