
It is possible to use the same 2FA token more than once
Closed, Wontfix (Public)


There are two ways to use the same 2FA token:

  • log in, then log out, then log in within the allowable time period for a given 2FA token
  • log in, then enter "high security mode"

In typical 2FA/TOTP setups the token is invalidated once used, even if the time period for the validity of the token has not passed.
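The window-based validity described above comes from how TOTP derives codes: every request within the same time step (conventionally 30 seconds) produces the identical code, so a verifier that does not explicitly remember consumed codes will accept the same one twice. A minimal RFC 6238-style sketch (the secret below is the RFC's published test secret; parameters are illustrative):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter.

    Every call inside the same `step`-second window returns the same
    code -- which is exactly why a code can be reused unless the
    verifier remembers that it was already consumed.
    """
    if for_time is None:
        for_time = time.time()
    counter = int(for_time // step)            # time-step counter
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    value = struct.unpack_from(">I", digest, offset)[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# Two "logins" 29 seconds apart can land in the same window and
# therefore yield the same code:
secret = b"12345678901234567890"  # RFC 6238 test secret
assert totp(secret, for_time=60) == totp(secret, for_time=89)
```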

Event Timeline

eadler updated the task description.
eadler added a project: Auth.
eadler added a subscriber: eadler.

Is there a specific security purpose behind this? That is, what attack is this measure aimed at defusing?

Offhand, the best attack I can come up with is "use a spyglass to observe the user enter the MFA code" -- although even if we invalidate the code after use, an attacker can still pull this off by typing fast enough to beat the user's request.

It could also decrease the time required to get a high-security session if you have full read access to the network, but I'm not sure how material that is.

I think this is more about conforming with what everyone else does. I know Google and Apple don't let you re-use tokens even if they are still in the valid timeframe; I'm not sure if there are specific reasons behind that beyond the attack you came up with above.

I'd like to understand the reason behind this before moving forward.

  • If it's a real security issue, it qualifies for a security bounty.
  • If it's a real security issue, we'd like to have seen it reported via HackerOne, so maybe our documentation or process are deficient.
  • There may be better ways to defuse whatever attack this is aimed to prevent. For example, if this only defuses attackers looking over your shoulder, we could make the input a password field.
  • We may want to make different tradeoffs than other providers. For example, it is common for some providers to enforce complex password requirements (3 digits and a prime number of uppercase letters) or autocomplete="off" on login forms, but we do not.
  • Even use by Google does not make something a best practice: concretely, major providers were vulnerable to a significant MFA bypass in May of 2014.
  • We might be vulnerable to similar attacks in other places, or make ourselves vulnerable in the future. We can't reason about vulnerabilities without understanding the attack.

So I just did some googling and came across the TOTP RFC (RFC 6238); this is taken from it:

Note that a prover may send the same OTP inside a given time-step window
multiple times to a verifier. The verifier MUST NOT accept the second
attempt of the OTP after the successful validation has been issued for the
first OTP, which ensures one-time only use of an OTP.

I think the RFC should be followed.

I think the RFC should be followed.

I'd like to understand why the RFC makes that a requirement.

RFCs are not perfect either. For example, I believe we deviate from the OAuth2 RFC to improve security. Even if this requirement is well-justified, following it without understanding it doesn't really satisfy any of the points I made above. We may still be vulnerable to whatever attack it is intended to prevent if we implement the recommendation without understanding why the recommendation exists.

I filed this as a correctness bug, not as a security bug. There are some attacks that could be performed on such MFA schemes but that are not applicable in phabricator's security model.

There are some attacks that could be performed on such MFA schemes but that are not applicable in phabricator's security model.

Can you describe such an attack, or point me at a description of such an attack?

I can't come up with a reasonable system offhand where this attack is an interesting one.

Now that I think about this a bit more, there is a security concern here: in a two-factor authentication context, an attacker could MITM the connection between the verifier and prover, obtain the authentication credentials and a single OTP value, and then log in with those credentials within the current time step. This would allow a second authenticated login, in sudo mode. I initially didn't think of this concern since the attacker can race anyway, but in this case the new connection is created silently.

In particular this allows someone with the following privileges:

  • READ access to the victim's network or computer
  • HTTP (i.e., WRITE) access to the Phabricator installation

to upgrade to the victim's credentials silently.

could MITM the connection between the verifier and prover

If they do this, they can just read the session cookies. Sure, having a high-security session is technically more powerful than only having a normal session cookie, but I don't think it's a very meaningful distinction. And you'll get a high-security session sooner or later, you just need to wait for someone to establish one.

It does let them establish a high-security session sooner upon observing an MFA login instead of needing to wait for a user to establish one themselves, which is what I meant by this:

It could also decrease the time required to get a high-security session if you have full read access to the network, but I'm not sure how material that is.

But this just doesn't seem like a meaningful escalation to me. For this to matter, a network has to be insecure enough that it can be MITM'd, but so well-protected that the attacker is racing against time and only has a small MITM window before they are discovered? What conceivable network has these properties?

(HTTP over public wireless has those properties, but using MFA and not using HTTPS is silly and the lack of HTTPS is a far greater problem than the larger MFA vulnerability window.)

Not sure if this is a possible/practical scenario... but

  1. Get a tool that lets you see the victim's keystrokes.
  2. Know the credentials for username/password auth.
  3. When the user types the 2FA code, reuse it immediately.
  4. ...
  5. PROFIT!

It's really a timing attack and depends on many factors (how fast the keylogger is, how fast the malicious user can retype the 2FA code, and how much time the code has left before it expires...), so I'm not sure whether it's practical. I'm just describing a scenario that occurred to me.

I foresee what @revi stated above as a pretty plausible scenario; even looking over someone's shoulder is plausible.

If reuse of an OTP were restricted, I don't see any real downside (besides possibly having to wait up to 30 seconds for a new token if you log in multiple times), and it would provide some extra security against the above and possibly other unknown scenarios.

epriestley triaged this task as Wishlist priority. Nov 16 2015, 6:44 PM
epriestley added a project: Security.

If you have long-term local access to a user's machine and can install a keylogger, you can just steal the session cookie. Being able to establish a second session instead of stealing the first session cookie is not a meaningful escalation.

If the underlying attack here is "look over shoulder / use a spyglass to observe key entry, then quickly enter the same key", using a password input instead of a text input would be a more effective deterrent: it also prevents the attacker from racing against the user to submit the request.

If the underlying attack here is "durably compromise the user's computer and/or network, then escalate to high-security slightly faster than you otherwise could", it seems like this only delays the inevitable.

This does allow meaningful escalation in situations where an attacker has substantial control but only temporarily: if you MFA on unsecured public wireless which is being observed, or MFA on a public terminal which has been compromised -- but do not escalate that particular session to high security -- this measure would prevent attackers from establishing a high-security session. But better yet is using HTTPS and not entering credentials into attacker-controlled devices.

Not really related, but I think the spec allows us to be compliant by assigning users consecutive, incremental secrets and exposing them publicly:

R6: The keys SHOULD be randomly generated or derived using key derivation algorithms.
R7: The keys MAY be stored in a tamper-resistant device and SHOULD be protected against unauthorized access and usage.

...although R7 seems contradicted later:

The key store MUST be in a secure area, to avoid, as much as possible, direct attack on the validation system and secrets database.

See the linked threads for some discussion of this issue (it was also discussed briefly on the oss-security mailing list).

The simplest fix seems to be to keep the timestamp of the last successful TOTP auth and ensure it can't be used again.
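A sketch of that fix, with illustrative names (the per-user state would live in the database in a real system): the verifier remembers the time-step of the last successful validation and refuses any code from that step, or an earlier one.

```python
import hashlib
import hmac
import struct
import time

class TotpVerifier:
    """One-time-use TOTP verification: remember the time-step of the last
    successful validation and reject codes from that step (or earlier)."""

    def __init__(self, secret, step=30, digits=6):
        self.secret = secret
        self.step = step
        self.digits = digits
        self.last_used_counter = -1  # would be persisted per user

    def _code_for(self, counter):
        # Standard RFC 6238 / RFC 4226 code derivation.
        digest = hmac.new(self.secret, struct.pack(">Q", counter),
                          hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        value = struct.unpack_from(">I", digest, offset)[0] & 0x7FFFFFFF
        return str(value % (10 ** self.digits)).zfill(self.digits)

    def verify(self, code, now=None):
        if now is None:
            now = time.time()
        counter = int(now // self.step)
        if counter <= self.last_used_counter:
            return False  # this window's code was already consumed
        if hmac.compare_digest(code, self._code_for(counter)):
            self.last_used_counter = counter  # consume the window
            return True
        return False
```

The first submission of a valid code succeeds; a second submission in the same window fails, which is the RFC's "MUST NOT accept the second attempt" behavior.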

This is technically easy for us to implement, I'd just like a real security justification for it. From the first link, the justification seems to be:

This is apparently to detect MITM and over the shoulder attacks.

The second link provides similar justifications:

an attacker could Man-in-The-Middle the connection ... Alternatively, an attacker could “shoulder surf” the victim’s second factor device

The third link also provides similar justifications:

Type of Attack:

  • Man in The Middle
  • Shoulder Surfing

I don't see any additional discussion of these threat scenarios in any of those links -- let me know if I missed something.

I don't understand how this protects against MITM in the context of a web application. If you can observe the code over the wire, can't you just observe all the subsequent traffic, the session key, etc? This protection makes sense in the theoretical general case of all possible applications of MFA, so I can understand why the spec might want to protect against it, but I don't understand how you can MITM MFA submission without also compromising the entire channel in the case of a web application. If you can read the MFA code, you can just read the session token or any other content instead.

I also don't understand how this really protects against shoulder surfing. Here's a protocol the attacker can follow to shoulder surf a code even with this mechanism in place:

  • Ahead of time, sabotage the network so they have an on/off switch for your machine's connection.
  • Observe your screen from afar using a spyglass.
  • When the TOTP challenge dialog appears, turn your network off.
  • Wait for you to enter the TOTP code. You try to submit it but fail because the network is off.
  • Enter that code themselves. They're the first to submit it, so they get session access.
  • Wait 30 seconds.
  • Turn your network back on.

Is there a realistic threat scenario where an attacker can observe code entry but can not disable your network? I can't come up with one offhand, and disrupting networks generally seems easier to me than observing screens: it's a capability I would assume anyone who can observe the screen also has.

I can imagine a more complex "airlock" protocol which defeats this attack (or at least raises the barrier to executing it) by locking codes out when we issue a challenge, but I worry that this mitigation is generally not being thought about carefully or adequately justified -- all discussion of it seems to just reference the spec without explaining in detail what scenarios we're defusing. I don't see any consideration of how this mitigation interacts with network disruption. The attack above (combining observation and network interruption) seems realistic to me, and makes this mitigation look a bit more like security theater than real security.

Basically, if you want to see this implemented, please describe an attack scenario you're concerned about in detail.

We can discuss whether the attack is possible or realistic, and whether preventing code reuse is a good component of mitigation or not. I do not plan to implement reuse prevention in the absence of a description of an attack which compellingly justifies it as a response measure. Every feature we add to a security system should serve a specific security goal, and "spec compliance" is not a worthy security goal (and often at odds with real security).

I don't think preventing code reuse meaningfully mitigates any of the referenced attacks so far, although none have really been described in detail. It is certainly possible I am misunderstanding the attacks, or not thinking about them carefully or thoroughly enough. You can convince me that this feature is worth implementing by describing a specific attack which this feature represents a sound technical response to, in the context of Phabricator.

epriestley claimed this task.

After the stack of changes under D19897 land:

  1. When we issue a challenge, only the specific session we challenged may respond to the challenge. You're free to tweet "I'm entering my MFA code right now, it's 123456" every time you provide MFA without creating any kind of meaningful risk (don't do this, of course) because attackers observing the code can't use it unless they control your session already.
  2. When we issue a challenge, the response may only be used to authorize the workflow the challenge was issued for. If we prompt you for MFA to "award a token", the code is locked to that workflow and can not be used to answer a prompt for another action (like "launch missiles"). (But note that, today, almost all existing MFA checks are part of a "legacy" workflow so this barrier is not meaningful in all cases.) This defuses the highly practical and extremely common "spill coffee" attack described in D19889.
  3. Responses may still be reused by the same session on the same workflow as long as the MFA prompt has not been "completed". This primarily arises when you have several different MFA factors and submit correct responses to one or more, but fewer than all of them. We validate factors individually, and do not require you to re-submit responses for the factors you answered correctly as long as you complete the workflow in a short period of time (for TOTP, 60 seconds).
  4. In the future, the ability to reuse a response in the same session and workflow may be extended to cover a larger set of errors than "you submitted one or more other factors simultaneously and some did not validate". The most likely error case we would recover from without re-prompting for MFA is unique key collisions when trying to apply changes to objects.

Because of (3) and (4), our implementation does not comply with the RFC and I don't plan to make our implementation compliant. I believe this implementation is dramatically more secure against all attacks and slightly more user-friendly than a minimal, compliant implementation would be.
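The session- and workflow-binding described in (1) and (2) can be sketched roughly as follows. This is an illustration under assumed names (`ChallengeStore`, `issue`, `validate`, `complete` are hypothetical, not Phabricator's actual classes from D19897):

```python
import secrets
import time

class ChallengeStore:
    """Sketch of challenge-bound MFA responses: a response is usable only
    by the session and workflow the challenge was issued for, only until
    the workflow is completed, and only within a short TTL."""

    TTL = 60  # seconds a validated response stays reusable (per point 3)

    def __init__(self):
        self._challenges = {}

    def issue(self, session_id, workflow):
        token = secrets.token_hex(8)
        self._challenges[token] = {
            "session_id": session_id,
            "workflow": workflow,
            "issued_at": time.time(),
            "completed": False,
        }
        return token

    def validate(self, token, session_id, workflow, now=None):
        c = self._challenges.get(token)
        if c is None or c["completed"]:
            return False
        # Locked to the issuing session and the original workflow, so an
        # observed code is useless to any other session or action.
        if c["session_id"] != session_id or c["workflow"] != workflow:
            return False
        if now is None:
            now = time.time()
        return (now - c["issued_at"]) <= self.TTL

    def complete(self, token):
        # Once the workflow finishes, the response can never be reused.
        if token in self._challenges:
            self._challenges[token]["completed"] = True
```

Under this model, an attacker who merely observes the code gains nothing: without control of the challenged session, `validate` fails regardless of how quickly they race.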