Eyeglass Security: Good, Conscientious Intentions, But…

I shared in a teachable security moment at home the other day. Ms. Context had recently had an eye exam and needed a copy of her prescription to get a new pair of glasses. She called the doctor’s office; no problem, they said: we can email it to you.  This seemed quick and convenient. The “fun” started with a subsequent message in her inbox bearing an unexpected and, er, cryptic and concise title, something like “encrypt”.  Recognizing the from address as the doctor’s office (rather than a malware sender who might have sought to tempt with the same title), we continued. We registered an account with the proffered email encryption service provider, creating a password that will probably never have occasion to be reused, and answering familiar life questions. After a few minutes, we successfully obtained and extracted the prescription-bearing PDF, free to email to an optician.

Ms. Context found these hoops frustrating; other patients might just have given up and waited for postal mail. As a security technologist, I could understand the rationale for encrypting the email; we’re dealing with a patient’s medical data here, after all, where privacy is fundamental and HIPAA regulations speak loudly to healthcare providers.  I’ve also had a hand in supporting the concept of Internet email encryption since the pre-Web era, as via this RFC. Nonetheless, it was hard to see that the balance of cost and inconvenient processing vs. tangible benefit clearly added up in this particular case.  What’s the actual threat?  It doesn’t seem likely that an attacker would find much value in intercepting email just to obtain a pair of glasses that probably wouldn’t match their own eyes.  There might be cases where people wouldn’t want specifics of their vision revealed (maybe if they’re approaching the limits of visual acuity required for a driver’s license?); while conceivable, these also seem fairly unusual.  I don’t know whether the decryption service’s design might expose protected data to insiders there, but that could become another threat to evaluate in the overall picture, one that wouldn’t arise if the service weren’t involved.

Security policies are normally and appropriately conservative, and medical offices should certainly be careful when storing and sending patient data.  (I’ll also recommend dialing carefully when using fax machines, but that’s another topic.)  For this example, though, many or most patients might not consider this piece of their data particularly sensitive (vs., e.g., prescribed medications they may be taking). Security methods should be effective and also convenient to use; here, they seemed burdensome instead. I wish the technology were easier to apply (and continue to believe that it can become so), so that protecting users’ data could be usual practice.  Where we stand, though, it can often be much easier to see and resent the tangible annoyances that security methods impose than to value the more amorphous benefits that they’re meant to offer.

Trusted – by whom, why, for what?

All too commonly, security architectures are simplified, dividing entities and components into exactly two categories: the trusted and the untrusted.  Sometimes the trusted are distinguished by their placement (e.g., one or the other side of a firewall), sometimes by possession of a key.  It’s problematic, however, to have only these two categories.  It’s also important to understand who is trusting a particular entity, what their basis for that trust is, and what properties are being trusted.  The fact that you’re able to verify the identity of a site that you’re communicating with can help to place it in the context of a relationship or reputation.  Certification authorities (CAs) can be important intermediaries in enabling that verification, constraints outside this post’s scope notwithstanding.

The ability to authenticate an identity, though, isn’t sufficient to ensure that it’s an identity that you’ll necessarily want to have anything else to do with.  Even if you do want to interact with it, the fact of its authentication doesn’t guarantee that its processing will do exactly what you want and expect, without performing additional operations that you didn’t request or anticipate.  I might choose to trust entity A and not entity B based on information that I’ve obtained or experienced, or you might make the opposite choice. The fact that both A and B have certified keys can provide useful input to overall trust decisions but doesn’t render those decisions moot.

Authentication Based on Shared (Semi-)Public Values

Few systems, whether technological, economic, or otherwise, are likely to operate as intended or desired if their fundamental assumptions are flawed or unsatisfied.  Many authentication methods rely on the premise that some value (a “something you know”, like a password) can be presented only by the valid holder of an identity; the holder and verifier share what’s assumed to be a secret value not known to others.  (It’s actually common and technically preferable for the verifier to maintain an identity’s validation data in a different form, like a hash, but I’ll elide that refinement for now.)
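As a brief aside on that refinement: the “different form, like a hash” idea can be sketched with the Python standard library alone. The function names here are illustrative, not from any particular system; the point is that the verifier re-derives and compares, and never needs to store the secret itself.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a derived key instead of the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify(password: str, salt: bytes, key: bytes) -> bool:
    """Re-derive from the presented password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)
```

A verifier holding only `(salt, key)` can still check a presented password, but a thief who steals that record doesn’t directly learn the secret.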

Things get qualitatively weaker and worse when the shared “secret” value isn’t actually secret, even though it’s implicitly expected to be.  US Social Security numbers (SSNs) provide an example here; despite regulations covering their scope of use, they can still fall into the hands of identity thieves. Knowledge of information, whether authorized or not, can become increasingly broad over time but rarely narrows once it’s been exposed or shared. It strikes me that use of an SSN (or a birthdate, relative’s name, or comparable personal attribute) as a basis for authentication reflects a confusion between two properties:

  • something that someone’s expected to know about themselves
  • something that no one else is expected to know about someone

They’re related, but clearly aren’t the same.  They’re confused at our peril.
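A little arithmetic (my own, not from any standard) makes the peril concrete: even if an SSN were somehow kept private, its keyspace is tiny compared to a value actually designed to be a secret.

```python
import math

def effective_bits(keyspace: int) -> float:
    """Entropy, in bits, of a value drawn uniformly from a keyspace."""
    return math.log2(keyspace)

ssn_bits = effective_bits(10 ** 9)   # every possible 9-digit SSN: ~29.9 bits
key_bits = effective_bits(2 ** 128)  # a randomly generated 128-bit secret
```

Under 30 bits is trivially searchable offline, and real SSNs are worse still, since issuance patterns and breach data shrink the space further; a value that many parties legitimately know, drawn from a small space, simply isn’t a secret.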

Breach disclosures: negatives unproven

I think it’s fair to generalize that most people have become somewhat numbed to the ongoing flow of data breach disclosures, as nicely tabulated by the Identity Theft Resource Center, and/or have just remained oblivious to them.  I think it’s a Good Thing for data holders to be accountable for disclosure reporting, both as a measure of protection for the individuals impacted and as a public disincentive for ineffective practices.  That said, though, I always wonder how many other breaches occur but go undetected or unreported.  “Lost” data can be tricky to detect, since it’s a type of good that can often be copied without being removed from its original source, and that doesn’t stand out thereafter as missing from that source. If I’m understandably concerned about the safety of my information, the fact that a particular data holder may have had a past breach doesn’t speak directly to the safety of its current practices; in fact, a past disclosure could well have served as a “wake-up call” motivating improved safeguards. On the other hand, a holder’s absence from a disclosure list doesn’t prove that no data has been or could later be breached from that holder.

The Authentic and the Dead

I’ve seen some discussion, as at this NPR Digital Afterlife story, about the issues that arise when a digital account holder dies or becomes incapacitated.  This is perhaps the ultimate “boundary condition”, and isn’t a case that’s likely to be top-of-mind for a subscriber or provider when a live account is established, but is probably an area where legal and technical practice will need to evolve in tandem.  Security professionals generally discourage sharing of passwords or accounts, but is it appropriate to inherit them?  If not, what should their disposition be?  Is a court’s approval required before an inherited password becomes acceptable for use?  In technical terms, some of these cases could be modeled by delegating authorization from a prior account holder to a new successor, thus maintaining a distinction about who’s authenticating and acting at what time, but comprehensive delegation technologies haven’t become pervasive. As it stands, we may find ourselves in the position of authenticating with the identities of those who no longer can, which may prove anomalous both in terms of technical security models and in terms of the legal and societal context in which those identities existed.
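The delegation idea mentioned above can be sketched in miniature. This is a hypothetical model of my own construction, not an existing protocol: the account holder (or, plausibly, an executor acting with whatever legal approval applies) issues a signed grant to a named successor, so the successor authenticates as themselves and the record of who acted, and when, stays distinguishable.

```python
import hashlib
import hmac
import json
import time

def issue_grant(holder_key: bytes, successor_id: str, scope: list) -> dict:
    """Create a delegation record authenticated with the holder's key."""
    grant = {"successor": successor_id, "scope": scope, "issued": int(time.time())}
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["sig"] = hmac.new(holder_key, payload, hashlib.sha256).hexdigest()
    return grant

def check_grant(holder_key: bytes, grant: dict) -> bool:
    """Verify the record is intact; any tampering with successor or scope fails."""
    body = {k: v for k, v in grant.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(holder_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, grant["sig"])
```

A real deployment would need public-key signatures, revocation, and expiry rather than a shared key, but even this toy shows the essential distinction: the successor presents a grant rather than the deceased’s password.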

On Security in the Media, and Falling Skies

When a new acquaintance asks me what I do professionally, and I cite Internet security and privacy in reply, it’s been almost universal that the next round in the conversation will contain one or more of the following elements, to paraphrase: “that’s really important”, “that must keep you busy”, and/or “I’m glad someone’s trying to do that job”, each of which reflects public interest and encouragement.  I usually respond that it’s a field that may be unique in IT for the extent to which news stories consistently inform and reinforce general awareness and interest.  Perhaps inevitably, however, most coverage concerns security or privacy failures; there’s less motivating news value in reporting about attacks that were successfully prevented without incident.  This contributes to a perception that security practice is imperfect (which it is) and therefore failing (a judgment which may or may not follow from the premise), rather than successfully providing protection that valuably reduces at least some aspects of risk (which, I believe, it does). Technologies and practices need to evolve as threats do, but it seems important to remain mindful of the fact that at least some of the sky is not falling.

On Security for Humans

I wish it weren’t so, but security practitioners and their results haven’t always been popular with the people who use the systems that they create and manage.  Few humans, e.g., are delighted by mandates to create, remember, and enter large numbers of long, unique, and non-mnemonic passwords, even though (small consolation?) that probably remains an easier task than performing strong mental cryptography. It seems alienating and presumptuous to declare that acceptable practice requires unrealistic behavior, rather than providing technologies that offer strong security without burdening users.  After all, machines are supposed to serve people, not the other way around.
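One small way machines can serve people here: let the computer generate the credential. A sketch, standard library only; the tiny word list is a stand-in for a real one of thousands of words (e.g., a diceware list), without which the entropy shown would be far too low.

```python
import secrets

# Illustrative stand-in; a real passphrase word list has thousands of entries.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple", "quartz", "violet"]

def passphrase(n_words: int = 6, wordlist: list = WORDS) -> str:
    """Pick words uniformly at random; entropy is n_words * log2(len(wordlist))."""
    return "-".join(secrets.choice(wordlist) for _ in range(n_words))
```

A six-word draw from a 7776-word list yields about 77 bits, strong yet far more mnemonic than a random character string, which is why password managers and passphrase generators shift the burden from human memory to the machine.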