Seven For A Secret Never To Be Told

Andy Farnell

Posted on Oct 17, 2024.

Why Passwords Still Rock

Passwords are the best kind of security you can have. There, I said it. And in doing so I'm going headlong against other security thinkers out there who will tell you otherwise - perhaps because they are also part-time salesmen for software companies.

This article contains several extracts from an upcoming mini-booklet series "Boudica's little book of trust" and "Boudica's little book of systems", minus the diagrams and mathematics.

One motive for writing is that the US National Institute of Standards and Technology (NIST) recently published its weighty (over 30,000 words) SP 800-63-4 Digital Identity Guidelines. Its "recommendations" about passwords have variously been described as long, long-overdue common sense, and a justified "slap in the face" to debilitating, arbitrary and capricious password policies "dreamed up by faceless IT bullies".

That's a lot of emotion out there! People have been given the run-around over passwords for decades. What also seems long overdue is a (hopefully) accessible discussion about device and account security in general.

Seeing that NIST is finally championing common sense is encouraging. But there is actually more to it. NIST is shifting to a recognition of the power dynamics in security: that it is the user who must determine passwords and take responsibility for them. What we'd like to do is extend that to talk about how computer security in general needs to come back to the user.

NIST is recognising that a lot of security folklore harms users. What we want to look at here, in broader strokes, is what we call iatrogenic technology. Because faceless "bullies" turn out to be just misguided administrators trying to do their best, and many of the problems with passwords boil down to our addiction to "convenience".

Something is iatrogenic if it "does more harm than good". The word comes from medicine to describe "causation of disease or other ill effect by a treatment, diagnosis or other intervention".

Iatrogenic cybersecurity is commonplace. All around the world are policy-makers and IT system administrators trying to protect our stuff, but getting it wrong because the subject of authentication is complex and actually not that intuitive. They are also under extraordinary pressure since governments have decided that the solution to bad security is not better education, or research into the mathematics and game-theory of security, but fining companies that get things wrong.

At the same time we continue to give a hall-pass to companies that write rubbish, over-complex security software that really serves surveillance capitalism, not the end users. We therefore see a spectrum from total non-compliance to ridiculous over-compliance, and very poor fitting of measures to threat models.

To make things worse, there's loads of misinformation out there; cybersecurity folklore, marketing spew, lobbying efforts - and these feed back into government too, including organisations like NIST, so perpetuating the cycle of poor security. This time NIST specifically set out to undo some of that misinformation and folklore.

Now, it is nice for us to be able to write some positive things about NIST, since the last time we spoke about them it was negatively, in the context of allowing encryption standards to be compromised by NSA influence. That said, this article will stay on the point that organisations and standards are only as good as their integrity and good faith.

The NIST guidelines are pitched softly, but in reality "compliance is required by all organisations that interact with the federal government online". That effectively means "all US organisations". In other words, what is pitched as "recommendations" are de facto new laws for Americans! We can expect a similar dynamic in Europe. And there's plenty that 800-63-4 gets wrong in our opinion, and still a lot of folklore doing the rounds. I think that even the new NIST standard, which has more precise and consistent language, remains woolly around security models, and fundamental concepts that relate to power and responsibility remain unclear.

Anyway, about passwords…

Figure 1: "Stop! Who would cross the Bridge of Death must answer me these questions three, ere the other side he see." – The Bridgekeeper

Why passwords?

Passwords are something special in computing and security. They are a gold standard of identity, intangible and having zero cost. They are also extremely effective. So why do we have all these other voodoo rituals… touching your phone three times while saying the last four letters of your pet's name backwards?

Partly it's because we've been using passwords wrong for about the past 40 years. The new NIST document partially puts that right. It's also because there's a massive "security industry" that sells things - and you can't sell people the ability to think up a new password in their own head. Where's the profit in that?

Instead they'll tell you that you need a new-fangled security system of gadgets and retina scans, and that you're too stupid to be trusted with your own security. They are wrong. In most cases passwords are just fine, if not better than the alternatives, and in this post we're going to explain why.

Thus another theme of this essay is personal responsibility, and the crux of the argument is that all security solutions which are not passwords solve problems that are not yours.

Like self-service checkouts at the supermarket that make customers into employees, they are a way of passing blame, liability, and work onto you in order to solve someone else's security problem. As Prof. Ross Anderson bluntly puts it:

"If Alice guards a system but Bob pays the cost of failure, you can expect trouble."

To investigate, let's start by turning the usual explanation of "authentication" on its head. You can be authenticated by:

* something you know
* something you have
* something you are

Typical examples might be: knowledge of a secret code-word, possession of a door key, and your fingerprints.

But instead let's look at this from the attacker's viewpoint and consider the discoverability of these. They are now:

* something you can guess or compel
* something you can steal or copy
* something everybody can observe

Let's start in the middle.

The second "least-worse" authentication method is a physical key - something that you own.

Ownership?

So, what does it mean to "own" something?

That's a very salient question these days when quasi-communist organisations like the World Economic Forum produce statements like "You'll own nothing and be happy", and the "owners" of smartphones have practically no control over, nor the least idea about what goes on inside "their" device.

As an attacker we are focused on stealing and copying a key. It's unlikely that anyone keeps a key constantly about their person. People lose keys. Or they just put them down out of sight where, as an attacker, we may have temporary access to them (the "evil maid attack").

The best trick for the attacker is not to simply take a window of opportunity to use a key, but to copy it without the owner knowing. Now the key can be used at any time while the victim still feels safe because the original key remains in their possession.

Even better, if the "key" is an active device like a smart-card or smart-phone it may be possible to install a trojan. Now, in addition to stealing credentials we can manipulate the device itself… make it stop working at a crucial time, or make it misidentify the owner to frame them or someone else.

Side-channel, optical and electromagnetic attacks on YubiKeys, smart cards, RFID and other physical tokens are getting more sophisticated all the time. Fortune always favours the attacker because vulnerabilities are found faster than common technologies can be updated. The takeaway here is that complex devices are less secure than simple craft. They expose a bigger attack surface. They hide compromise in their complexity. They provide a false sense of security that leads to over-confidence.

Coming back to the question: do you really own your devices? Even something like your car? Increasingly these things come "compromised out of the box". Hackers use the term "pwn" to mean that a computer has been completely taken control of. Things you buy today are pwned before you even unbox them. They contain spyware installed by the manufacturer which most people cannot remove. They phone home to get "updates" and are completely outside your control. However, the legal definition of ownership involves "control over property". It is hard to imagine that any electronic device sold since about 2010 meets the barest criteria for reasonable expectations of function, or would satisfy the venerable Sale of Goods Acts (1893 and 1979).

Surveillance capitalism, big data, and the "telemetry" hidden inside most consumer products to cover for awful software engineering, all put consumers at risk. There is a naked conflict between your security and the profit of companies. Right now you are losing very badly and the corporations are winning hands-down in the race to turn every electronic gadget into a weapon against you.

What the NIST guidelines do for passwords is essentially to restore some ownership to the user (in the technical language the authenticator). Choosing your own password creates an obvious sense of ownership. It creates a completely different psychological context for memory which is obviously a core property of passwords. It is also a very different security model from having a random security token assigned to you - since from inception more than one party knows it. It speaks to fundamental matters of personal sovereignty.

Our slavery to others lies in our ignorance, and as Gurdjieff said, the worst slavery is ignorance of ourselves. The only thing that you can truly own is within yourself. The perfect password exists only within your mind, and has never passed your lips nor been written down. With care it can be hidden even from your conscious self yet still be available on demand. This takes some self-discipline, and learning methods for keeping and managing inner secrets, which we are never taught at school.

Security models: password or tracker?

Indeed, people do not discriminate between two vastly different security models that should really be obvious with a moment's thought. The question is, "who is the security for?"

Security schemes that ask you to carry around a device which is connected permanently to a network and uses a mechanism that is entirely opaque to you are a different kind of security. They are more than mere access control. They are not security for you.

It may pass for "something you have", but it also functions as a location or close-proximity biometric remote sensor for an observer elsewhere. It's a tracking device.

If you want to understand tracking then do some research on "spouseware". Tracking may be useful for pets and car keys. But for humans it is an abuse and fundamental breach of rights. Spouseware and other monitoring demeans both parties. For couples in a relationship it's security cosplay that objectifies each as "Damsel in distress" and "Knight in shining armour" respectively. It inevitably harms relationships.

For business relations it establishes inappropriate and possibly illegal power relations that are dehumanising. Delivery companies get away with claiming they track inventory, whereas they are really more interested in tracking their drivers. Security personnel given "body-cams" for their "own protection" are reduced to fleshy CCTV poles. Their job is simply to stand in the right place (usually in danger) while the machine hoovers up data.

By carrying these things we fall into a trick - the lie that these devices serve us. Taking physical responsibility for something that belongs to someone else is not the same as it being yours and having control of its use. That's a very poor standard of "ownership". If you give, do or take anything in the name of "security" make sure it is your security.

Where devices are issued that employees are expected to use for "security" the employer really should pay rent for its storage in your home and a fee for its responsible care and upkeep such as charging. Ideally such devices should be surrendered at the perimeter of the premises but are sometimes dual-use as physical access and internal network access devices. They are often BYOD, blurring the line between information system compartments. To the extent you "need" to carry it around outside work it's more like the electronic ankle tags that criminals are forced to wear.

You are more than this flesh

Now it should be easier to see that a biometric (something you are) is even worse. It's a public token. Using it as an authentication "secret" is possibly one of the dumbest mistakes imaginable. Doing so has at least two enormous problems.

Firstly, everything you are, which can be measured, is on permanent display to the world and can be taken without your knowledge. You leave fingerprints on everything you touch. Your face, iris and even retina are visible to any high-resolution camera. You shed millions of skin cells containing your DNA everywhere you go. Humans literally broadcast biometric data.

Secondly, it is immutable. Unlike a password that you can change, or a physical key that you can rescind and re-issue, you cannot change your physical being. If used as a secret, compromise once is compromise forever. These are plainly undesirable properties for a "secret".

Indeed that is the point of a public token - you would want as many people to know about it as possible if you want to be widely recognised and able to prove your in-person identity in any place. It's why biometrics seems a great idea for verifying in-person (human-present) known-prior biological identities. But that requires interaction and eyes-on validation.

Unfortunately, DNA and other intimate physical data is more than just a handy public token. It reveals a presently unknown number of other secrets regarding your health, habits and history. These secrets could be used as leverage against you and so deliberate or careless dissemination of biometric data is probably inadvisable. It's like copying the entire contents of your hard drive to someone so that they can see one file.

But this kind of identity check is not authentication. As an authentication method, biometrics assumes not just a trusted computing base but a fully secure environment and the least capable attacker. These assumptions are violations of Shannon's maxim and Dolev-Yao principles.

So biometrics is more about tracking of individuals than authentication. Indeed authentication need not be tied to identity at all. There are other trust mechanisms based on roles, transferable tokens, behaviour and much more. Tying all transactions to specific individuals facilitates totalitarianism and erodes democracy. The biometrics security business is growing very fast and lobbies governments hard. It is therefore disappointing to see the words "authentication" and "identity" used loosely within the NIST document.

Identity is not trust

Some of this might seem confusing. Surely knowing someone's identity is synonymous with knowing that they are "authenticated"? (And here we are going to be slack, using "trust" and "authentication" synonymously.)

No. Identity and authentication are distinct concepts. One may prefigure the other, but only in a certain context. So, before moving on to the best kind of authentication, which is passwords, let's dig deeper into what authentication is.

It's the process of determining that a supposed identity is in fact as supposed. Wow! If the above sentence seems a little tricky, that's because it is. I might also have said that "Authentication is the process of determining that a supposed identity is authentic", simply begging the question. We've also invoked two other slippery concepts, "Truth" and "Trust". Detailed discussion of both concepts is way beyond this essay, but we need to think about them a little.

Suppose that you are M from a James Bond story. As a spymaster you have trusted agents all around the world. Some of them are such sensitive assets that even you don't get to know who they really are. But what you do know is that they are wholly reliable agents who perform well. We call this anonymity and it's a much more useful and important concept than the shallow, negative idea given in the popular media.

We rely on anonymity all the time. When you order a pizza you care about a number of things like timely delivery. You need to know that the delivery driver is at the door. You need to know that it is indeed a delivery from the right company and that the pizza is for you and not your neighbour in the next apartment. All of this happens without you needing to know the delivery driver's personal identity.

The driver is a role. Indeed, the delivery company may take steps to protect the driver's identity, like not giving out her real name on the phone. This is information compartmentalisation. It's something that's been very hard until recently, when we gained advanced cryptography like "zero knowledge proofs" - something to talk about in another issue.

Cryptography has done a great deal to advance models and methods of trust and truth. Its proximity to mathematics, and so to logical proofs, allows us to make statements such as that something "could not reasonably be otherwise" given what we know about mathematics, computing potential and time. For example we could sign a message so that the recipient knows it could not reasonably have come from anyone else.
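To make that concrete, here is a minimal sketch (ours, in Python, with hypothetical names) of authenticating a message with a shared secret; a real system would add nonces, timestamps and proper key management:

```python
import hashlib, hmac, secrets

# A shared secret established out-of-band, e.g. Alice and Bob's dice session.
shared_secret = secrets.token_bytes(32)

def sign(message: bytes, key: bytes) -> bytes:
    """Produce a tag that only a holder of the key could have computed."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    """Compare tags in constant time to avoid timing side-channels."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"The eagle flies tonight"
tag = sign(msg, shared_secret)
assert verify(msg, tag, shared_secret)                       # genuine sender accepted
assert not verify(b"tampered message", tag, shared_secret)   # forgery rejected
```

Anyone without the key cannot produce a valid tag, which is the "could not reasonably be otherwise" property in miniature.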

Anyone but whom?

Well, all methods require something called a "root of trust" and an initial "key exchange" between two parties.

It can be as simple as Alice and Bob meeting in a room with nobody else present, throwing some dice and remembering a sequence of random numbers that they agree never to tell anyone else. That is their shared secret. Crucial to the process is the in-person meeting. To the extent Alice and Bob already know each other they can both be certain that the secret is between only them, and that each is who they claim.

The real "M", Dm. Stella Rimington (former director of British MI5 intelligence) expressed extreme scepticism about modern ideas of "identity" that goes back to this "root of trust" problem. She considered it foolish if not impossible to try reliably tying a biological identity to an authentication token.

Anyway, we take some meeting event, be it in-person or via a notary, to be the inception or genesis node in a trust chain. Thereafter a chain of trusted events follows. Based on this idea of a shared secret, in each subsequent encounter we can be sure that we are talking to the same person even in their physical absence. In a sense the secret "stands in" for their identity.

Notice we chose a random process for the secret, at the point of creation. Had Alice turned up for the meeting and said to Bob, "Here's my idea for a great shared secret…", Bob would have no way of knowing that Alice had not had prior instruction to use that secret, whether by subtle influence or coercion.

This is why passwords should be chosen entirely by the user (authenticator) with the least number of constraints. NIST has come around to formally requiring this, which overturns about 40 years of password madness.

One of the events in subsequent transactions may be that the parties decide to change the keys. This "credential reset" (or key rotation) is a weakness in all logical security systems. The alternative, that the secret is immutable, leads to other problems. We'll examine both cases shortly.

With regard to the newest NIST standards, verifiers shall not impose password rotation. This reduces an attack surface where a weakness in the change protocol continually exposes the parties to risk.

What NIST does not address is the case where you want to change your password for whatever reason. Back to our principle of symmetric power-relations in security: both parties should have an equal right to request a change. There are many reasons people may want to change their password, and they should be able to set their own standards for security. NIST seems to place a problematic asymmetrical onus that "verifiers SHALL force a change if there is evidence of compromise of the authenticator". But what about all other cases? A too-literal interpretation of this may lead to regimes where the authenticator is refused a request to change a password!

A key weakness in all systems lies at the inception of the root, because someone may claim an "identity" falsely. Notice we said "to the extent Alice and Bob already know each other". There are famous cases of people turning up to key signing meetings bearing a fake driving license, passport or other identity token, and gaining trust. The problem here is that the principals don't actually know each other at all well. If there is a weakness in the initial key exchange when logical identity is established, then all transactions thereafter are rotten.

Back to the source

If we go back to the roots of epistemological philosophy we can ask:

"What is the only thing I can be sure of?"

Philosophers like Berkeley and Hume thought about the self-evident truth of seeing our own hands before our eyes or looking in a mirror. Descartes went for a deeper foundation in saying that the only thing we can really be sure of is ourselves and more specifically our own conscious mind - "I think therefore I am" (Latin: cogito ergo sum). In other words, the only sure root of trust in any matter of human affairs is ourselves. Another word for this is "self-reliance".

The next crucial question to ask is:

"Who is the security for?"

Remember, there is no such thing as bare "security" as an abstract noun. It is relational and always for someone or something, from someone or something, and to some definite end or purpose.

Discussion of security often starts with the example of a bank. This is not a good starting point because giving instructions to a bank acting as an agent is actually quite complex.

Rather, as a base case, consider a locked box in which I keep a valuable diamond. The box is in an accessible public place, but is otherwise impenetrable. So that others won't break into the box, it's secured by a password (maybe a "combination" or number). Only this information allows access to the box.

The person who sets the password is me. I can change the password if I already have access to the box.

Here are some known important properties:

* The security is for me
* It's security from all other persons
* The purpose is to protect the contents from theft
* The security is defined entirely by me
* Only I know the password
* Entering the password is atomic with opening the box (no state is stored or observable)

There are a number of weaknesses:

  • the design of the box may be faulty
  • someone may observe me entering the password
  • someone may guess or brute-force the password
  • someone may beat, bribe, trick or threaten me into revealing the password

Mitigations against these are to:

  • understand the design of the box
  • make password entry invisible
  • make the password large relative to the access time to the lock

Password-based systems often add a few "armour" features (sketched in code below):

  • a limited number of tries
  • exponential backoff time between unsuccessful tries
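As a rough illustration (our sketch, not a production design), the armour logic might look like this in Python, where try_password stands for whatever check the real system performs:

```python
import time

MAX_TRIES = 5        # limited number of tries
BASE_DELAY = 1.0     # seconds of delay after the first failure

def guarded_attempts(try_password) -> bool:
    """Allow MAX_TRIES attempts, doubling the wait after each failure."""
    delay = BASE_DELAY
    for _ in range(MAX_TRIES):
        if try_password():       # caller supplies the actual password check
            return True
        time.sleep(delay)        # exponential backoff frustrates brute force
        delay *= 2               # 1s, 2s, 4s, 8s, ...
    return False                 # locked out after MAX_TRIES failures
```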

However, there are rather few good mitigations against the final set of weaknesses, which are weaknesses in me, the person for whom the security operates. This is the self-reliance criterion.

Furthermore, anyone else who handles, maintains, or otherwise interacts with the box is a threat. If I do not own the box and have exclusive access to it then risks may arise.

This is an idealised security model. In it I have perfect knowledge of the security mechanism, sole knowledge of the secret and exclusive use of the channel to interact with the box. In practice all other shared-secret based security systems will be weaker than this. Other parties may have a stake. Knowledge may be incomplete.
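Here is a minimal sketch of this idealised box, assuming (our assumption, for illustration) that it stores only a salted, slow hash of the password and compares in constant time:

```python
import hashlib, hmac, secrets

class LockedBox:
    """Toy model of the idealised box: one secret, chosen and known only by the owner."""

    def __init__(self, password: str):
        self._salt = secrets.token_bytes(16)
        self._hash = self._derive(password)

    def _derive(self, password: str) -> bytes:
        # Store a salted, deliberately slow hash rather than the password itself.
        return hashlib.scrypt(password.encode(), salt=self._salt, n=2**14, r=8, p=1)

    def open(self, attempt: str) -> bool:
        # Constant-time comparison; no state about the attempt is stored.
        return hmac.compare_digest(self._derive(attempt), self._hash)

    def change_password(self, old: str, new: str) -> bool:
        # Changing the secret requires access, exactly as in the model above.
        if not self.open(old):
            return False
        self._salt = secrets.token_bytes(16)
        self._hash = self._derive(new)
        return True
```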

Lock-out

Imagine that you come home late one night to discover, as you get to your house, that you've lost your keys. You didn't trust a neighbour to keep a spare set, so you phone a locksmith.

"Sorry", says the locksmith, "but your keys and locks are high-security. I can only make you a spare key if you show me proof that you live there."

"Sure", you say, and then realise that all your documents are locked in the house. In fact the locks are so good and the house so strong that you're locked out for ever, now homeless. The house has to be demolished because nobody can get in.

A paradox of security is that we don't always want it to be too good. At the same time, if there are any known back-doors, someone will discover and exploit them - always and without fail.

Avoiding lock-out affects so many areas of technology. Manufacturers of IoT ("Internet of Things") devices want to make sure that customers who inevitably lose passwords for devices can get back into their toys.

People forget passwords. This is the reason that "password recovery" schemes exist. Most of them are terribly flawed. Some organisations will simply send you the shared secret in a plain-text email. Others will presume to "reset" your password to something random, which they send by email. A cheeky impostor can deny you service by filing a fake reset request. This causes an email address, which is not very secure - and may itself be lost or hijacked - to become a proxy identifier used for authentication. Sometimes a telephone number is used as a trusted identity. All these schemes lead to a tangled web of over-intrusive and infuriating make-work which place the onus on you to prove you are the owner of information that you have forgotten.

The best way to not lose passwords is to use an encrypted password wallet which is backed up offline. Beware, not all "password managers" are created equal and some may introduce new security risks where they interact with other applications like web browsers.
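For illustration only, a toy wallet along these lines can be built from a key-derivation function and an authenticated cipher. This sketch assumes the third-party Python "cryptography" package and is no substitute for an audited password manager:

```python
import base64, hashlib, json, secrets
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive an encryption key from one strong, memorable master passphrase."""
    raw = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return base64.urlsafe_b64encode(raw)

salt = secrets.token_bytes(16)
wallet = {"example.org": "correct horse battery staple"}  # hypothetical entry

f = Fernet(key_from_passphrase("my long memorable master phrase", salt))
blob = f.encrypt(json.dumps(wallet).encode())   # store blob + salt offline, backed up

recovered = json.loads(f.decrypt(blob))
assert recovered["example.org"] == wallet["example.org"]
```

The point of the design is that only one secret (the master passphrase) ever lives in your head; everything else lives encrypted and offline.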

Despite the NIST recommendation that the authenticator should not be forced to change passwords, that doesn't mean you shouldn't choose to. Every six months or yearly is a reasonable period for most risk profiles. Beware that this is the time you are most vulnerable to password loss or confusion. Don't delete old entries immediately. Test new credentials and get into the habit of remembering them before updating your wallet permanently.

For IoT, most people never change default (or backdoor) passwords and so hostile hackers can easily take over devices. Recent regulation in Europe bans defaults, but can have the terrible side effect of "bricking" badly designed devices leading to large amounts of e-waste (WEEE) and killing any re-use of devices via a second-hand market. All very bad for the environment.

Password reset mechanisms are therefore an important area to understand in cybersecurity. For IoT and physical devices a simple button is the best way. This should give a short time window in which all access controls are removed so a user can get in and set up new credentials. In these cases it is essential that the user (owner) initiates the cycle and that the vendor does not store anything. The vendor is obviously a much bigger and attractive target for compromise.

But when the system is remote it's more difficult for it to know that the user requesting a reset is really who they claim. This leads to elaborate multi-factor schemes like using an email address as a root of identity. This in turn creates privacy problems and forces services that could otherwise operate anonymously (and therefore more securely) to store and potentially leak private info.

Duress

We can deal with coercion to a certain extent by modifying the mechanism to have a fail-safe - a second password that will lock the box forever, or self-destruct, and so on. If coerced to open the box, Alice can give the fail-safe password and thwart the attackers. That may not stop her being hurt or threatened further, unless it somehow makes clear that the fail-safe has been triggered and the game is up, at least as far as getting the contents goes.
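A sketch of the fail-safe idea (plaintext passwords and names here are purely hypothetical, for illustration):

```python
import hmac

REAL = "open sesame"       # Alice's genuine password
DURESS = "open says me"    # the fail-safe given under coercion

state = {"locked_forever": False}

def try_open(attempt: str) -> str:
    if state["locked_forever"]:
        return "PERMANENTLY LOCKED"
    if hmac.compare_digest(attempt, DURESS):
        state["locked_forever"] = True   # fail-safe triggered: box bricks itself
        return "PERMANENTLY LOCKED"
    if hmac.compare_digest(attempt, REAL):
        return "OPEN"
    return "DENIED"
```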

Another possibility with cryptography is that the box contains several "compartments" with different keys. Each compartment appears to be the size of the entire box, but only one key opens the compartment containing the diamond, while other sacrificial keys apparently open the box onto worthless items. This creates a deniable system.

Phishing

Other than lock-out there's another serious problem with passwords. While threats and beatings are very rare, and can be thwarted by deniable encryption, the main weakness of password-based systems is getting tricked. The prevalence of social engineering attacks that trick people into giving up their passwords is a big problem today.

Suppose that while Alice is away some thieves replace her safety deposit box with an identical one, but the keypad is really a radio that signals the entered password to the thieves who have taken the real box elsewhere. As soon as she enters her password into the fake box the thieves can open her real deposit box.

Phishing (and similar variations: smishing, spear phishing, etc.) tricks someone into giving up their password by leading them, via a link or fake phone number, to a system that seems like their familiar stronghold, but is really a facsimile. This could be a fake website that looks like your real bank, for example.

An operational defence is to regularly enter the wrong password a few times before entering the real one. Sadly people are afraid to do this since many security systems will lock them out and there is no easy way to tactically signal intent to a system except by a "duress" password that we'll talk about in a moment.

If the system you are talking to is authentic you'll get an expected "access denied" response. If the system is fake you'll apparently get access using the wrong password. The fake system doesn't know better. At that point you know something is amiss.

However, attackers get around this by simply passing any credentials you enter on to your real account. You are actually interacting with your real stronghold via a man-in-the-middle attack that silently snoops on your communications. Any time later the attackers can go to your safe and enter the credentials they harvested. A way round this is to never use the same password twice, which we'll now consider.

One time passwords

Let's suppose every time Alice enters her system she changes the password. This offers some limited security against anyone who spies only on her keypad when entering the password. If they try to use the same password again it will fail. But this gives no protection against an attacker who spies on the whole transaction, and every keystroke that Alice makes. A pernicious problem is attacks by insiders. For example, Facebook stored users' passwords in plaintext, so that a rogue employee could abuse someone's account.

If she changes her password anyone already inside the system will simply see the new password when she enters it. As we discussed earlier, procedures for changing a password often expose a temporary weakness, so it would seem changing the password too often isn't a good idea. NIST agree and they state that from now on "verifiers and CSPs SHALL NOT require users to change passwords periodically."

How do we get the advantage of regularly changing passwords without the downsides? Instead of changing the password manually each time, Alice configures her system to have a sequence of passwords, set up in advance. After each is used it stops working and the system expects the next password in the list. For example she would use "m0nd4y", "tu3sd4y", "w3dne5day" etc. on successive days.

Computers allow an almost endless list of passwords to be generated and for a system inside her safety deposit box to know which to expect next. This method uses synchronised pseudo random number generators.
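A minimal sketch of the idea, loosely in the spirit of HOTP-style schemes: both sides derive password N from a shared seed and a synchronised counter, so each password works exactly once. The seed and helper names are our own illustration:

```python
import hashlib, hmac, struct

SEED = b"dice-rolled-shared-secret"   # hypothetical shared seed

def one_time_password(counter: int) -> str:
    """Derive a short, human-enterable code from the seed and a counter."""
    mac = hmac.new(SEED, struct.pack(">Q", counter), hashlib.sha256).digest()
    return mac.hex()[:8]

# Alice's token and the box each track the counter independently.
alice_counter = box_counter = 0

entered = one_time_password(alice_counter); alice_counter += 1
expected = one_time_password(box_counter);  box_counter += 1
assert entered == expected   # in sync; replaying 'entered' later will fail
```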

If Alice has another, small and portable computer, she can use it (maybe combined with her fixed password) to generate a fresh "one time" password each time. Even if she gets phished or spied on, whatever the attackers learn will be useless because the next time the system will expect a fresh password from the sequence.

There are two related problems with this. Firstly it might be possible for the attackers to work out how the sequence of passwords is generated. In the obviously weak example above they would soon guess that "thur5d4y" and "fr1d4y" might be candidates. Also Alice's computer might get out of whack (de-synchronised) with the system in the box, and so lock her out.

But something else has happened. We've just moved the problem from "something you know" to "something you have". Alice no longer memorises the passwords. Instead she relies on a device to manage her password sequence. Thieves are now motivated to steal Alice's little computer, which in practice would be a smart-card or key-fob device. She now has something easily taken. What we want from passwords is to keep the password only in mind, never manifest as anything physical that can be observed, interrogated or reverse engineered.

Partial passwords and challenges

Another way of approaching the problem of eavesdropping or MITM attacks is to have just one password kept only in Alice's head, but each time she goes to open the box it lights up a few numbers like "1, 7 and 12". These are the letters of the password that Alice must enter. For example, if her password was "undiscoverable" then letter 1 would be "u", letter 7 would be "o" and letter 12 would be "b". So Alice would enter "uob" as her access code. Each time, the box offers a different challenge she must respond to. Any attackers would have to successfully phish Alice many times to have a chance of knowing the whole password.
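In code, the challenge-response is tiny. This sketch (ours, reusing the example above) shows the box choosing positions and Alice answering with just those letters:

```python
import secrets

PASSWORD = "undiscoverable"   # known only to Alice

def make_challenge(k: int = 3) -> list[int]:
    """The box picks k random 1-based letter positions."""
    return sorted(secrets.SystemRandom().sample(range(1, len(PASSWORD) + 1), k))

def respond(challenge: list[int]) -> str:
    """Alice answers with only the requested letters, in order."""
    return "".join(PASSWORD[i - 1] for i in challenge)

assert respond([1, 7, 12]) == "uob"   # matches the worked example above
print(make_challenge())               # a fresh challenge each time, e.g. [3, 7, 11]
```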

A downside is that this places a demanding cognitive load on the authenticator. You need to mentally count through the letters. There is a temptation to write down the password. It is possible to use another scheme (for example Shamir's) in which Alice simply offers several self-selected characters from the password. They must be in the correct order and chosen from a password that is very long.

Coupling and conflation

For "convenience" (the mortal nemesis of security), eventually all the functionality tends to concentrate back in one place, like a smartphone, or a phone number and other ID that gives access to the phone. This in turn simply becomes a physical security token (something you own) which is a desirable target for theft.

Today, owning someone's smartphone is akin to having their keyring, bank cards, password book, diary and phone book all in one. The modern smartphone is a security disaster area! It is also unreasonable to suppose someone will always carry a personal device like a smartphone. They break regularly, batteries die, and there may be no connection. For health, productivity and environmental reasons smart people are increasingly turning away from smartphones and returning to simpler voice-only devices.

Numerous problems accrue. Firstly, this concentration of function actually reduces security. It creates high-value targets. Secondly, the complexity leads to more frequent lock-out or even deadlock situations in which the person cannot recover access to their digital life. Thirdly, no matter how many channels there are, one must remain the primary root of trust from which all others can be reset. The idea of multi-factor authentication is good, but many of its current implementations, relying on precarious technology, are no good.

Single sign-on

Large organisations that run thousands of servers and have tens of thousands of employees cannot possibly manage access on a per machine basis. Imagine trying to remember passwords for each host! They use software like Lightweight Directory Access Protocol (LDAP) and Remote Authentication Dial-In User Service (RADIUS) to manage access. A user authenticates just once and gains authorisation to use services across the company, on many servers, managed centrally and in a granular fashion. This is sometimes called an "authentication server".

Of course we tell people never to reuse the same password for multiple sites, and that centralising authentication creates a single point of weakness. Yet in many ways this is exactly what single sign-on does. An enemy who has successfully compromised an authentication server can access all of the sites it secures.

This kind of organisational management has migrated onto the general web. Nowadays many web-sites say things like "Sign In with Facebook" and so on. Web single sign-on (SSO) is designed for convenience rather than security. It also causes a serious privacy issue. Each of the sites can use browser cookies or other means to know which other websites you've visited. SSO allows detailed tracking. If you value security and privacy don't use this.

In reality most people only access a couple of dozen websites regularly. Too many sites needlessly ask people to "sign in" to use them. They do this to obtain tracking information to sell to advertisers. This creates a security risk to users. Every unnecessary credential creates potential for phishing. It also overwhelms users with an absurd number of passwords to remember.

Single sign-on systems leak information about websites visited via tracking, even if the user does not log in to those sites. This tangles-up security with privacy in a very unhelpful way and puts forward a faux Faustian bargain. It asserts that in order to be secure you need to give up more privacy. This is nonsense. It's possible to design secure systems that perfectly preserve privacy, but big-tech vendors prefer to muddy the waters as that suits their data harvesting projects.

Function creep and MSPs

Many schemes that outsource secret management are no different than writing down a password and entrusting a partner to keep it in a safe hidden place. Of course this can be done in a "blind" way, so that the trustee (proxy verifier) has no idea what the credentials are for. They may be lightly encrypted/obfuscated.

But the company Meta were recently fined 100 million for storing 600 million Facebook and Instagram passwords in a totally insecure way. The security track record of Microsoft, Google or Amazon is no better, is very unlikely to improve, and you should not outsource your security to such entities.

Companies that manage authentication secrets this way are put in an extremely powerful position. As soon as I ask someone to keep a secret for me, it's not a secret any longer. In other words, two people can keep a secret so long as one of them is dead.

What all of this boils down to is a self-disproving statement: "People are bad at keeping secrets so they need other people to keep them for them". In other words "security as a service" is something of a misnomer. At best it's a way of unloading some cognitive effort and technical resources while getting scale advantages and perhaps more consistent professionally managed security.

Managed services save companies money and save individuals from having to set up and configure software to help them manage many accounts. That works well for larger organisations. At worst, it can be a protection racket that harvests personal data and, as a side-effect, provides weaker security than people can obtain by themselves with basic tools.

Earlier we mentioned the problems with biometrics, perhaps the worst being that they leak information about the user which impacts on their privacy and dignity. In the age of surveillance capitalism it suits vendors to ask users to maintain elaborate security regimes that as a side-effect leak personal information. Sometimes it's questionable whether components of over-engineered security systems do anything useful at all other than harvest data, which they sometimes hide behind "accountability" requirements.

An example is "geofenced" security that presumes the user must only be in certain locations. Not only is this rather trivially spoofed it often requires the user to carry an active GPS enabled device or otherwise leak their location to the system. That may in itself be a security risk as illustrated by the Strava fitness app which exposed US military bases. It introduces yet another path for system failure, if GPS signal fails. These systems tend to be less convenient in that they demand more cognitive effort and memory.

Complexity explosion

It is often said that computer users are in a constant battle against complexity.

Why is it not working today?

Because the DNS service went down? Because the GPS signal is weak? Because I wasn't quick enough to respond to an SMS message? Because Snapchat or Google or some other private service decided to suspend my account today? Because my fingerprint is dirty? Because I'm wearing sunglasses? Because my voice sounds stressed - having to deal with all this infernal security jiggery-pokery?

Modernity has a serious complexity problem. Much of the dancing around security is making this worse in every way.

There is a paradox at the heart of all this. Security really is complexity. We use complexity as a weapon against the attacker. We devise a system that is too complex for the attacker to understand or guess, but is something that we understand. That means our knowledge and understanding is the primary asset we have. But so often people try to design systems that take this away from us.

Modern "consumer" culture impacts this in several insidious ways:

Firstly, convenience. Everyone wants to sell us convenience because too much technology competes for our time and attention. But security and convenience are incompatible. People hate complexity and love convenience. But that is another way of saying we want to shrug off security.

Secondly "surveillance capitalism" has emerged as an economic force. This is really a failure of law and rights and is another way of saying that criminal enterprises have become normalised.

Thirdly "monopoly" concentrates too much power in too few places. Centralisation and security are incompatible. A wise person puts their eggs in many baskets. Distribution is why multi-factor authentication works.

All of this creates intractable complexity - a deranged circus of mutually antagonistic interests. In this struggle your security comes last. So when a company loses your data and says:

"Your security is our priority"

…they are straight-up lying their flaming pants off. Even when governments fine them billions of dollars, your security is never a company's priority, given all the economic interests and market forces surrounding the situation. It's time we all acknowledged this more openly.

Real software engineering is a huge compromise. Software is almost never designed nowadays. It evolves and emerges from a struggle between many stakeholders; front-end developers versus back-end developers, UX designers, accessibility compliance teams, marketing, user-tracking, search engine optimisers, publishers and distributors… security rarely even makes the top 10 in the list.

We must acknowledge that unnecessary complexity created by security systems is also an enemy. It defeats everything including good security itself. And that is the real issue people have had with passwords in the first place. They say "I just cannot remember them all".

The best possible system is simple. It aligns privacy, security and self-reliance. Self-reliance is important because it reduces external dependency and it makes a security scheme more resilient.

Memory and passwords

For decades, almost all schemes for using passwords have been done badly. You should use long pass-phrases with high semantic entropy but a good mnemonic structure and longevity. Password "rules" enforced by verifiers have made this impossible, until this year when NIST decided enough was enough.

Corporate policies for passwords are almost always asinine, illogical and based on outdated superstitious folklore. Ross Anderson mocks the typical password policy frameworks as disingenuous busywork. They are often based on ass-covering (CYA) liability-dodging ideas that make password security worse by encouraging users to write down unmemorable passwords or recycle old ones according to some forced mnemonic scheme that a good cracker will easily spot. A mathematical analysis of passwords shows that "length is everything", with vanishingly small gains for using strange characters (alphabet size). Randomly generated passwords are only of use as temporary induction tokens and should be replaced immediately by memorable ones of sufficient length.
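The arithmetic behind "length is everything" is simple: a uniformly random password of length L over an alphabet of size A has L × log2(A) bits of entropy, so length contributes linearly while alphabet size contributes only logarithmically. A quick illustration (human-chosen phrases have less entropy per character, but the same scaling holds):

```python
import math

def entropy_bits(length: int, alphabet: int) -> float:
    """Bits of entropy for a uniformly random string: length * log2(alphabet)."""
    return length * math.log2(alphabet)

# Length beats alphabet size: an 8-char "complex" password versus
# a 20-char lowercase-only passphrase.
print(entropy_bits(8, 94))    # ~52 bits (full printable ASCII)
print(entropy_bits(20, 26))   # ~94 bits (lowercase letters only)
```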

What is remarkable is the power of the human mind when able to employ associative and creative methods. We can actually recall large numbers of very long passwords and tend to use holistic and sometimes unconscious mechanisms for recall. Personally meaningful phrases, modified from obscure poems, musical lyrics or childhood friends, can be used with clues and context-appropriate reminders to make passphrases that are impossible to crack with any dictionary, any amount of computing power or "AI" assistance.

Mutuality, power and security

Mutual authentication is easy to understand. It's security that works both ways.

Think about your relationship with your bank. In the unlikely case they deign to answer a telephone and you ever get to speak to a human, who seems "in charge"? They ask you security questions and talk down to you like you are probably a criminal. But this seems ass-backwards. It's your money, isn't it?

This is the most common example of the second type of security model: "vicarious" security, or security "for your own good". In the TV series "Mr. Robot", Elliot asks his accomplice Tyrell to shoot him if he ever betrays the F-Society plan. Peter Sellers' character Inspector Clouseau hires Cato (Burt Kwouk) to assault him at random. We use vicarious security a lot without realising it. We make a social contract with "the State", giving it sole legitimacy to use violence, even against ourselves, in return for peace and order.

The problem with vicarious security is that we sometimes forget who is the boss. It is the initiator of the contract. It is you, because you gave the bank your money and said "please protect it". The bank did so in consideration of loaning out your money for interest. If the security provisions of banks don't meet your needs then it's time to change them, and if necessary set up your own bank as Dave Fishwick did.

In fact most of the problems in security are not that the customer is inauthentic, but that the bank is! "Identity theft" is a made-up term, because you cannot have your identity stolen. It is a mixture of two crimes: impersonation by the fraudster and criminal negligence on the part of the verifier.

This is the root of phishing. It's what organisations like NCSC spend huge effort trying to educate people about. At Boudica we're always telling people "never respond if an official organisation or bank calls you". Instead, call them back on a published number. But even a telephone number is no surety. Hacking of the SS7 telephone routing system is rife and anyone who can MITM your voice network can take on the number identity of a trusted entity.

So what's the answer?

Mutual authentication has been used by spies forever. It's a pair of passwords. In spy films the characters will say something mysterious like "The badger is in his lair", and the other responds "The eagle flies tonight". In reality that would get them both exposed by a passer-by and shot, so real spies actually use more innocuous codes like "nice hat". Anyway, hopefully you can see this works as a kind of challenge-response. Indeed the "conversation" (protocol) can go to several rounds of depth that allow Alice and Bob to mutually authenticate each other.
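In computing terms this becomes challenge-response in both directions. A minimal sketch (ours), reusing a shared secret of the kind established in the key-exchange discussion earlier:

```python
import hashlib, hmac, secrets

SECRET = secrets.token_bytes(32)   # established at the in-person meeting

def answer(challenge: bytes, key: bytes = SECRET) -> bytes:
    """Prove knowledge of the secret without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Alice challenges the "bank"; an impostor without SECRET cannot answer.
alice_challenge = secrets.token_bytes(16)
bank_response = answer(alice_challenge)
assert hmac.compare_digest(bank_response, answer(alice_challenge))

# The bank then challenges Alice in turn, so authentication is mutual.
bank_challenge = secrets.token_bytes(16)
alice_response = answer(bank_challenge)
assert hmac.compare_digest(alice_response, answer(bank_challenge))
```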

Some organisations have started to experiment with this again. Has your bank given you a password that they expect you to ask them for? Probably not, because banks tend to be arrogant and see authenticating themselves to customers as beneath them. On the face of it the bank is more powerful. It has your money. It has lots of money. It has more sophisticated computer systems than you do.

These are all reasons why you should not trust them (or something that purports to be them) - because security should be for the weak against the strong - not the other way about! Power plays a central role in security which is often overlooked.

Spies use mutual authentication because they are both vulnerable. Either could defect and burn the other, and either would be harmed by making an identity mistake. But your bank has nothing to lose and you have everything to lose.

Conclusions

We've looked at a few core concepts in authentication and seen where passwords fit into that, and why they are a special method. Passwords (something only you know) cost nothing to implement. They are robust and flexible so long as limits are placed on the cognitive load of the authenticator and arbitrary constraints are removed so that passwords are chosen and managed by the end user.

Secret knowledge is the ultimate root of trust in all practical systems, although this annoys a commercial (in)"security" industry that wants to conflate biological identity with authenticity and foist its Procrustean solutions upon everyone. So long as we follow proper operational security principles, simple passphrase-based security methods are as good as any other system. Hopefully the new NIST standards go some way to restoring password-based security to its rightful position as the King of practical security methods.