Copyright STAT

The Q-Collar, a neck collar inspired by the woodpecker, has been worn by NFL players and thousands of young athletes. When it debuted in 2012, it promised to reduce concussion risk by lightly squeezing the jugular veins, supposedly stabilizing the brain. By 2019, the company had shifted to more ambiguous language, saying the device could "protect the brain." It has raised tens of millions of dollars and proudly advertises that the Q-Collar is "FDA authorized." To most consumers, that sounds like proof. It isn't, as a colleague and I detailed in a recent investigation in The BMJ.

Our Freedom of Information Act request revealed that FDA reviewers had serious reservations. Internal memos show staff debating weak data, acknowledging that MRI findings didn't match real-world injury outcomes, and questioning whether the company's statistical findings meant anything at all. Those doubts were hidden behind redactions, yet the official press release announcing authorization in 2021 sounded triumphant.

The Q-Collar was authorized through the de novo pathway as a Class II device, meaning moderate risk. This pathway was created to help genuinely novel devices reach patients efficiently. Consumers often assume "FDA authorized" means "FDA approved." But approval requires rigorous demonstration of safety and effectiveness, usually through well-controlled clinical trials. Authorization, especially through the de novo pathway used for the Q-Collar, allows marketing based on "reasonable assurance of safety and effectiveness" when the data are suggestive but not definitive. That often includes studies in which at least one key indicator of effectiveness is not met. The result is a designation that looks definitive but can mask deep uncertainty about whether the product actually works.

To consumers, authorization may look like a binary decision: safe or unsafe, effective or not.
In reality, it was a compromise, basically "close enough." The FDA addressed its own doubts by having the company add a fine-print disclaimer: the imaging data weren't proven predictors of future injury, and the device didn't prevent concussions. Yet the company now advertises a supposed "66% reduction in the likelihood of brain damage." That gray zone is where marketing thrives and belief takes root.

Each football weekend brings another concussion on national television and another round of questions about how to keep players safe. News reports of chronic traumatic encephalopathy (CTE) in former players further heighten public anxiety and encourage searches for solutions. That fear fuels a growing market for brain protection gear, from the Q-Collar to products like Guardian Caps, now supported by the NFL. Although such products stop short of claiming to prevent concussions, their marketing often implies reduced risk, and many consumers understandably absorb that message. Others focus on protection from the unseen accumulation of repeated head impacts, pathological effects that aren't readily obvious from a single game.

Yet that promise is impossible to verify for any individual user, and it creates a self-reinforcing loophole: no injury ever counts as evidence of ineffectiveness, while the absence of injury is taken as proof of success. Every hard hit without a concussion becomes validation. A headache-free practice becomes confirmation. Just recently, the Jets' star cornerback and official Q-Collar partner Sauce Gardner suffered a concussion while wearing the device. Proponents can interpret such incidents not as evidence of failure but as further proof of the device's necessity: if he was hurt with it, imagine how much worse it could have been without it.

Once a product earns federal authorization, every new outcome tends to be read through that lens. The stamp of legitimacy doesn't just permit belief; it promotes it.
People reason that if the FDA allowed it on the market, there must be something there, and every anecdote becomes retroactive validation.

That matters far beyond sports. When federal regulators lower the evidentiary bar for one category, it normalizes the same shortcuts elsewhere. Each time a product slides through on ambiguous data, it sends a cultural message: certainty is optional.

The FDA's approach here reflects our collective psychology in the U.S. We crave simple solutions to complex problems. When data disappoint, we substitute belief. Parents see a "low-risk" device and think, why not try it? For many, $200 is a small price to pay for peace of mind. Regulators see public demand and think, why stand in the way? The phrase "worth a try" becomes federal policy.

That mindset also pays. Venture capital loves devices that sound medical but face lower regulatory hurdles than drugs. An "FDA authorized" label turns speculative engineering into investable science. When the gatekeeper rewards novelty over validation, it's no surprise that marketing dollars outpace research dollars.

But there is harm in false hope. Startups and sports leagues tout "science-backed" solutions, while regulators stay silent to avoid appearing anti-innovation. The result is a moral hazard: profits without proof. Every unproven product that gains official legitimacy steals attention and funding from interventions that truly help. Every time the FDA hides dissent behind redactions, it trains citizens to assume the worst. Transparency is the currency of trust; once spent, it's almost impossible to earn back.

This is not just about one device. It's about what happens when institutions grow comfortable living in their own ambiguity and hiding behind opacity. Patients and consumers shouldn't need a FOIA request to understand whether "FDA authorized" reflects compelling evidence or a device permitted despite serious internal concerns, addressed only through labeling caveats.
Once the public senses that difference, when transparency is replaced by redaction and reassurance, trust begins to decay. And trust, once gone, is the hardest thing to rebuild.

What would genuine transparency look like?

• Publish reviewer memos and voting summaries after authorization.
• Require plain-language labeling that distinguishes "authorized" from "proven."
• Provide a concise, layperson summary describing the strengths and limitations of the evidence, and the level of confidence regulators place in it.

None of these reforms would slow innovation; they would simply let the public see the reasoning behind it. Science is self-correcting only when the evidence is visible.