On the 14th and 15th of July, we held the OAuth Security Workshop 2016 at the University of Trier. We also had an IETF 96 side meeting on OAuth security at 18:20 in the beautiful Café am Neuen See to continue the discussion. Below is a summary of my takeaways from these meetings.
Characterizing the attack patterns
Since late last year, there have been a bunch of security issues reported around RFC6749 and RFC6750. The IdP Mix-up attack reported by University of Trier researchers 1 is one such issue; Hans and John came up with the leaked-log attack, and I wrote a blog post about the “code phishing” attack. They are different attacks but share the same characteristic: all of them aim at the extraction of the authorization code. RFC6819 categorizes this as “code leakage”. I call it “code extraction”, which is the “copy” part of the “copy-and-paste attack”.
The code extraction by itself is not very useful. The code has to be used to obtain an access token (and a refresh token where applicable) to gain access to the resource. There are multiple ways to do this, but all of them can be characterized as “code insertion”, which is the “paste” part of the “copy-and-paste” attack.
Now, these attacks were around the authorization code, but “code” is not the only parameter that can be extracted or inserted. The redirect_uri can also be extracted and inserted, and so can the “state” and other parameters. So these attacks can be generalized to:
- “Parameter extraction”; and
- “Parameter insertion”
Andrey Labunets of Facebook characterized these in his presentation at the OAuth Security Workshop 2016 2 as:
- Containment failure; and
- Authentication failure.
Parameter extraction can be done in several ways:
- Eavesdropping on the channel (e.g., man-in-the-browser);
- Message destination change (e.g., “code phishing”, “claiming the same scheme”); and
- Server log compromise.
Parameter insertion can also be done in several ways, for example:
- Pasting the parameter into the attacker’s browser.
Most of these attacks are directed at the “front channel” communication that goes through the browser, because that channel is not protected. All parameters passed through the browser are potentially tainted, and we all know that trusting a potentially tainted variable is rather dumb. They have to be verified before use.
For example, the redirect_uri can be checked by saving the requested redirect_uri in the session and checking the callback URI against that value. 3 However, you cannot check the “code” and other variables that are generated on the server this way.
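The session-based check just described can be sketched as follows. This is a hypothetical illustration, not code from any particular library; the `session` dict and all function names and URLs are my own.

```python
def start_authorization(session, redirect_uri):
    # Remember which redirect_uri we are about to send to the
    # authorization server, before redirecting the browser there.
    session["expected_redirect_uri"] = redirect_uri

def handle_callback(session, callback_uri):
    # The callback must arrive on exactly the URI we requested;
    # anything else suggests a mix-up or parameter-insertion attempt.
    expected = session.pop("expected_redirect_uri", None)
    received = callback_uri.split("?", 1)[0]  # strip the query string
    if expected is None or received != expected:
        raise ValueError("redirect_uri mismatch: possible mix-up attack")
    return received
```

As footnote 3 notes, using a different redirect_uri per authorization server makes this check catch the originally reported mix-up attack.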
Fundamentally, the root cause of parameter insertion is the fact that in the front channel 4 we do not:
- authenticate the message sender;
- authenticate the message receiver;
- integrity protect the message;
- encrypt the message.
Similarly, what makes the extraction possible is the fact that we do not:
- identify and declare the involved actors at the outset and check (intent mismatch);
- declare the entity roles (protocol endpoints) at the outset and check (intent mismatch);
- declare the protocol variant;
- authenticate the message receiver;
- integrity protect the message (which necessarily authenticates the sender);
- encrypt the message.
Extraction is also possible from server logs and databases, but their protection is out of scope for the protocol itself, though a best-practice document should warn about them.
Fixing the protocol once and for all
Given the above, the way to fix the protocol seems to be:
- Create the list of actors 5;
- Create the list of endpoints that the series of messages in the protocol goes through, and include it in the authorization message (either by value or by reference);
- Clearly state the protocol variant and the message type/number;
- Sign the message for source authentication and message integrity protection.
This essentially removes the attack surface. 6 You could further strengthen the containment by:
- Encrypting the message.
There are facilities to do this. For the authorization request, OAuth JAR can be used for most of these, except for the listing of all of the actors and endpoints; those have to be added. For the authorization response, we could potentially use the ID Token of OpenID Connect.
Should we mandate it?
So, we can create a new protocol that is much more secure than RFC6749 and RFC6750 this way. Should we mandate it once it is done?
My answer is no. 7
Security is a risk-control measure. We always have to balance the cost and the benefit. For protecting a resource with low value, the current RFC6749 and RFC6750 with appropriate constraints should be good enough; the higher overhead cost would not be justified. 8
For protecting a resource whose value is above a certain level (e.g., write access to a Financial API), it would be more appropriate to use a modified protocol.
In the security sphere, one size does not fit all. We have to take “appropriate measures” instead.
So, my advice is this:
- Create a BCP that prevents many of the identified attacks without changing the current RFCs; and
- Create a higher-security version of the protocol, which fixes the known attacks as well as many of the unknown attacks once and for all, and provide it as an option for higher-risk scenarios.
- Fett, D., Kuesters, R., and G. Schmitz, “A Comprehensive Formal Security Analysis of OAuth 2.0”, arXiv:1601.01229v2, January 2016, <http://arxiv.org/abs/1601.01229v2/>
- Labunets, A.: Lessons from breaking and defending OAuth in practice, OAuth Security Workshop 2016 Proceedings (slides)
- Using a different redirect_uri for each authorization server and performing this check will solve the mix-up attack that was originally reported.
- While the same sort of attack can be mounted on the back channel, it requires breaking TLS/PKI through something like Heartbleed to start with, so it is much more difficult. Thus, attacks are directed at the front channel.
- Note: there needs to be an identifier that identifies the authorization server. This is not defined in OAuth, as the client_id is authorization-server specific and not globally unique. Similarly, the subject and the resource need to be identified appropriately.
- This is in line with the secure authentication protocol best practice proposed in:
Basin, D., Cremers, C., Meier, S.: Provably Repairing the ISO/IEC 9798 Standard for Entity Authentication. Journal of Computer Security, Volume 21, Issue 6, 817-846 (2013) <https://www.cs.ox.ac.uk/people/cas.cremers/downloads/papers/BCM2012-iso9798.pdf>
- So was the Google identity team, Anthony Nadalin of Microsoft, Torsten Lodderstedt of Deutsche Telekom, etc.
- I am not saying that social networks are fine with this level of protection: a social network can often be one of the highest-value resources, depending on the usage pattern. We also need to document the best practice as a BCP.