Between 2001 and 2004, I reviewed consumer and enterprise anti-virus products in my role as a technology journalist for UK publications. Industry leaders have since said that these early anti-virus reviews shaped today’s security testing industry.
The reviews were written when the technical and commercial environment was very different to today's. They are referenced here for professional context only, not as current opinion or endorsement.
What anti-virus testing looked like at the time
- Detection was primarily signature-based, with early and unreliable heuristics
- Product update frequency was measured in hours, days or even weeks
- Testers manually constructed and curated malware test sets
- False positives frequently caused system instability
- Performance impact was a primary concern on low-resource machines
- Independent testing was limited and vendor transparency low
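Signature-based detection of that era can be illustrated with a minimal sketch: scan a file's bytes for known patterns held in a signature database. The signature name and byte pattern below are invented for illustration; real engines used far larger pattern sets and faster matching algorithms.

```python
# Hypothetical signature database: detection names mapped to byte patterns.
# These values are illustrative only, not real malware signatures.
SIGNATURES = {
    "Example.TestPattern": b"\xde\xad\xbe\xef",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signature patterns found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

clean = b"ordinary document contents"
infected = b"prefix" + b"\xde\xad\xbe\xef" + b"suffix"

print(scan(clean))     # []
print(scan(infected))  # ['Example.TestPattern']
```

The weakness is obvious from the sketch: any threat whose bytes differ from the stored pattern, even trivially, goes undetected, which is why heuristics were promoted as the proactive alternative.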
Selected observations from that period
- False positives often caused more operational damage than missed malware
- Heuristic detection promised proactivity but frequently destabilised systems
- Centralised management was emerging but fragile at scale
Why this still matters
Many structural problems in security testing today are refinements of issues that existed then, not new phenomena. These early experiences directly informed my later focus on adversarial testing, failure modes and realistic evaluation. Those early anti-virus reviews were a necessary fact-check on the best available protection products, but we’ve come a long way since then.
Evolution of testing
Some industry insiders had specific rules that they expected responsible testers to follow. These included:
- Do not allow threats access to the internet
- Do not create new threats
- Only use prevalent threats
Increasingly, security products required internet access to enable their highest protection capabilities, which challenged the no-internet rule. Denying internet access reduced the threats' ability to spread over the internet, but it also hampered the security product under test.
Creating new threats became necessary when testers needed to challenge the anti-virus products' claims of stopping hackers. Attackers would modify existing threats to evade detection, so testers needed to mimic those modifications to build realistic tests.
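The point about trivial modification can be shown with a harmless sketch: if detection relies on a whole-file hash signature, appending a single byte produces a variant the signature no longer matches. The sample bytes here are a placeholder string, not malware.

```python
import hashlib

# Placeholder bytes standing in for a known threat sample.
original = b"pretend-malware-sample"
# A trivially modified variant: one appended byte, behaviour unchanged
# in principle, but the whole-file hash is now completely different.
variant = original + b"\x00"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(variant).hexdigest()

print(h1 == h2)  # False: a hash-based signature misses the variant
```

This is why testers argued that lightly modified samples were legitimate test material: attackers were already producing such variants at scale.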
The challenge with testing only prevalent threats was this: who decides which threats are prevalent? This problem still exists today. If a security vendor's product fails to detect a certain threat, the vendor can claim that the threat is not common enough to matter, a claim that is hard to test.
Ultimately, testers evolved their methods: they often used live internet connections, created modified or new threats, and either tried to address the prevalence issue or ignored it entirely.