Remove :js responder

-1 for while(true). Let’s consider all use cases

  1. JS templates are used with AJAX, so the X-Requested-With header is sent and .xhr? is enough protection.

  2. JS templates are also used with inline script tags, so while(true) would break both on the origin website and on an attacker’s website. Not an option.

  3. Since we can send an additional header anyway, why bother with prepending while(true); at all?

So .xhr? is the most elegant way and covers most attack vectors.
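For reference, here is roughly what the check boils down to; this is a sketch of what Rack's request.xhr? does, not new behaviour:

```ruby
# Roughly what request.xhr? (Rack::Request#xhr?) amounts to: a check for the
# X-Requested-With header. A cross-site <script src="..."> tag cannot attach this
# header, while XMLHttpRequest calls made from your own pages (e.g. via jQuery) do.
def xhr?(env)
  env["HTTP_X_REQUESTED_WITH"] == "XMLHttpRequest"
end
```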

I really don’t understand why JSON hijacking wasn’t solved the same way. while(true) is uglyish.

Yes, it makes sense: including a "while(1);" prefix forces you to use XHR anyway, since otherwise you won't be able to strip it out.

Maybe the reason was to protect against old browsers that didn't implement proper same-origin policy?

What do you mean? The hijacking issue was just the same, the browsers were the same, and I recall XHR could add additional headers… so while(1) sounds like a weird idea.

So you're saying, Egor, that if Rails raised an error when the request for the js format is not xhr?, it would solve the security issue?

Please stop conflating the discovery of a security issue with the philosophical waxing about architecture. It’s not helping the case. As stated previously, responding with JS is not only a wonderful architectural pattern, it’s also not going anywhere. Not in a gem, not in a deprecation, not anywhere. We’ll fix the security issue, and Rails will continue to proudly champion the use of this great pattern.

Guess what, it won’t be the last security issue Rails ever has. Just like it won’t be the last security issue any piece of software ever has. But we need to level up as a community in our handling of these issues.

Frankly, I’m surprised that people are willing to hire Homakov for any work in the area given his reputation for irresponsible disclosure. Finding a legit security issue is a great service, but disregarding all security issue management protocols in its publication is doing a disservice to all who would otherwise benefit from the work.

Rails has had a codified security process for many years now. It’s available for all to read on Ruby on Rails — Security. Making a blog post on your personal site isn’t one of the channels listed as a responsible way of disclosing discoveries. Posting specific 0-day attack vectors against affected sites is not one either.

Making a public report over a holiday weekend and then, when the response to the report doesn’t immediately follow the proposed solution (remove the feature), going off the reservation with specific attacks is just plain irresponsible. No two ways about it. It also undermines any other recommendations or suggestions coming from said reporter.

So. Damage is already done for this issue. But lest it encourage others to act as irresponsibly as Homakov has done on this issue, I hope others take a broader approach to future issues. Report systemic or framework issues per the reporting instructions on Ruby on Rails — Security. Report specific application issues directly to the application developers, responsibly, per their reporting instructions (see Basecamp: Security overview for the one we use at 37signals).

Presumably we’re all in the same boat here: Make Rails better and more secure. Let’s row like we mean that. The Rails security team (Michael Koziarski, Jeremy Kemper, and Aaron Patterson) has worked hard in the past to provide us with a good process, they’ve followed that process, and they deserve our thanks and support.

David, first I must say I admire both you and Homakov and I'd certainly hire him if I could afford it.

I believe what led him to create that post was precisely your response to his e-mail.

I agree he shouldn't have created this discussion publicly, but you shouldn't have replied the way you did either. You should have tried to understand the problem first before saying it would go nowhere.

I believe this is what caused Homakov's reaction.

Sincerely, Rodrigo.

Rodrigo, many great things around the world can be dangerous if used in the wrong way, but there are great people who try to make the usage of tools secure so humanity can profit from them. I am not a fan of “don’t use this or that because it can be dangerous”. Let’s find a way to make it secure. That’s harder, but better in the long run, I think.

What email is that? I replied to Homakov on Twitter thanking him for the discovery and stating a clear intent to get the issue resolved. What I rejected outright was the knee-jerk reaction to remove the possibility of generating JS responses from Rails. This rejection was confirmed when it became clear that the specific mitigation strategy was also motivated by architectural opinions on what's a dinosaur and what's not.

But again, neither this thread nor Twitter is the proper forum to discuss 0-day exploits. It's the reason we have a standardized security reporting and response protocol. It's why we go to great lengths to coordinate proper fixes across multiple versions of Rails, follow the CVE process, and take the other responsible steps along the way.

To sidestep all that doesn't help anyone but Homakov in the short-term to build a reputation as a take-no-prisoners grey hatter. I question the business strategy of that long-term (imagine having a business dispute with Homakov after giving him access to your system -- yikes!).

Again, my opinion of the process is removed from the value of finding security holes. Of course finding and responsibly disclosing security holes is a good thing. I just wish that Homakov, and others who might be inspired by his tactics, would realize that there's plenty of gain to be had personally by subscribing to these time-tested practices.

As many have already mentioned, enforcing request.xhr? for js requests provides adequate protection. You can’t set custom headers when making a cross-domain request with a <script> tag, so checking for one ensures the request is local. That’s what request.xhr? does.
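As an illustration (a sketch only, not necessarily what the link below does), enforcing that application-wide could look like this:

```ruby
# A sketch: reject requests for the :js format that did not arrive via XHR.
class ApplicationController < ActionController::Base
  before_filter :verify_js_requested_with_xhr   # before_action in Rails 4+

  private

  def verify_js_requested_with_xhr
    if request.format.js? && !request.xhr?
      head :forbidden   # or raise, if you prefer a loud failure
    end
  end
end
```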

Here’s the solution we’re using: Prevent cross-origin js requests · GitHub

I could see wrapping that up into something convenient like protect_from_forgery and making it a default in Rails.

-Javan

37signals

Your first message in this thread. He even quoted the relevant part in his article:

"Not only are js.erb templates not deprecated, they are a great way to develop an application with Ajax responses where you can reuse your server-side templates. It's fine to explore ways to make this more secure by default, but the fundamental development style of returning JS as a response is both architecturally sound and an integral part of Rails. So that's not going anywhere.

So let's instead explore ways of having better escaping tools, warnings, or defaults that'll work with the wonders of RJS."

I understand you had the best of intentions, but I wouldn't have started the discussion by ending it. By that time you were not yet aware of how applications were affected or whether there was a sane way to fix the issue. If I were you, I'd first *ask* if there's an alternative to deprecating its usage *before* deciding it wouldn't go anywhere.

I guess that was what pissed him off...

I agree with you that this is not the "correct" way of dealing with the situation from an ethics perspective, but let's be honest: Homakov probably wouldn't be a known name if he hadn't put the attack on GitHub into practice. I'm not defending this approach to self-marketing, but you can't tell us it isn't effective :wink:

Best, Rodrigo.

This fix seems great.

Homakov, what do you think? Might it fix the problem?

Cheers,

Gabriel Sobrinho

gabrielsobrinho.com

There’s a precise reason why I did so-called “full disclosure”.

I started writing an email to security@37signals, then I checked out the previous conversation we had, where I was asked for my link for the Hall of Fame. I gave the link, and after a month it wasn’t added. No, I don’t really care, but given some other details (it was the second RJS bug in Basecamp, the “RJS bug” overall wasn’t a 0-day at all, and your ultimate rejection of RJS removal), I somehow ended up writing a post with full disclosure.

I mostly follow responsible disclosure policies. But this is not my duty, especially when I have a couple of reasons to follow another policy.

I feel entirely comfortable with shooting down REMOVE THE FEATURE as the first response to any security report. Our response should be, as it was here -- as it always is: let's fix the issue. This is doubly so when security reports are conflated with other architectural agendas, like "dinosaur feature removal quests".

As it turns out, there is a very reasonable proposal for fixing this as well: check js requests for xhr? and add the xhr header to all js requests through the framework, so it'll work for GET as well. We'll get this proposal vetted, of course, but if it pans out, it would be an entirely invisible fix that requires zero adjustment of the architectural style. Just as I mentioned it would be, and as Homakov initially rejected outright as a possibility.
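To make the "zero adjustment" point concrete: with jquery_ujs, a remote link is already issued as a same-origin XHR, and jQuery attaches the X-Requested-With header to such requests, so existing js.erb responses keep working unchanged under an xhr? check. A sketch, with made-up controller, path, and selectors for illustration:

```erb
<%# app/views/comments/index.html.erb: jquery_ujs issues this link as a same-origin XHR. %>
<%= link_to "Refresh comments", comments_path, remote: true %>

<%# app/views/comments/index.js.erb: the JS response template, exactly as it was. %>
$("#comments").html("<%= escape_javascript(render @comments) %>");
```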

Yes, there are many ways to make a buck, if you don't care about how you do it. I don't find that worthy of celebration.

I'm sorry that you weren't added to the hall of fame in a timely manner, and I'll check into what happened, but that hardly justifies this shotgun approach to disclosure. Had you also contacted the other application makers for which you provided 0-day attacks? Did they also slight you in some way that justified this treatment?

I'm just trying to figure out what the thought process is here, so we can calibrate our response for next time. If you get a response that a certain architectural style isn't going to be changed as you first suggest, should the community expect that you'll blast random applications with specific attack vectors on a 0-day basis?

And if the reasons you describe are enough to, in your mind, warrant the 0-day option, should potential clients expect that similar minor disputes or grievances might result in 0-day public releases from the code bases you have examined?

You're obviously skilled at finding security issues. That's a great skill to have, and it's valuable to the community when wielded with care. But you're not wielding it with care.

"Homakov rejected outright initially as a possibility" -- https://twitter.com/homakov/status/406466616897986560

Sorry for using the “REMOVE THE FEATURE” wording; now I see how it sounded. All I wanted was to fix the security bug itself; throwing the feature away wasn’t my goal.

Homakov, what we all want is to fix security bugs. There isn't any justification for thinking that we aren't all in the same boat on that. Things get muddy when you hinge the reporting of a security issue on your personal assessment of what a good architectural pattern is, though.

So please. I implore you to change your ways. You can be a great part of this community and you can build a reputable security consulting business without resorting to these irresponsible tactics.

We will get this specific issue fixed shortly (as soon as the GET .js solution of adding xhr headers has been vetted); I'm not in the least concerned about that. I'm concerned that we'll be right back at square one the next time you find an issue and arm the internets with 0-day exploits against a random selection of applications before giving the makers a reasonable timeframe to defend themselves (one holiday weekend isn't it).

I think I may have a solution; if I am wrong, please don’t shoot me :wink:

https://github.com/artellectual/transponder/blob/develop/RAILS_UJS_VULNERABILITY_FIX.md

I may have a possible solution, check it out (sorry if I double posted; I'm not sure if my previous post went through, as I have shitty internet).

Kontax on Heroku

Kontax on Github

https://github.com/artellectual/transponder/blob/develop/RAILS_UJS_VULNERABILITY_FIX.md

In this thread it has been repeated several times that .js endpoints via GET are a security breach, and that people should stick to JSON.

Let me make clear for the archives that this is not generally the case. There are valid use cases for dynamically generated public JavaScript, for example when your application exposes a widget that third-party clients request in order to have their DOM modified with content. Think Disqus. I have implemented centralized rating systems for hotel providers that work that way.

The potential problem happens when your JavaScript GET endpoint exposes sensitive/private data.

Now, since the former is a rare use case compared to the latter, the XHR protection should probably be enabled by default, but you still need to be able to opt out.

Xavier, absolutely. Javan has suggested we have something like protect_from_forgery, which can be opted out of on a per-action/controller basis.
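For illustration, such an opt-out could look something like the sketch below. The module and macro names (CrossOriginJsProtection, allow_cross_origin_js, WidgetsController) are hypothetical, not a shipped Rails API; the point is only the shape of a default-on check with a per-action escape hatch.

```ruby
# Hypothetical sketch only: the names below are made up, not an actual Rails API.
# Idea: .js responses require XHR by default, with a per-action opt-out for
# genuinely public JS endpoints (Disqus-style widgets, badges, and so on).
module CrossOriginJsProtection
  extend ActiveSupport::Concern

  included do
    class_attribute :cross_origin_js_actions
    self.cross_origin_js_actions = []
    before_filter :verify_same_origin_js_request   # before_action in Rails 4+
  end

  module ClassMethods
    # Opt-out macro, in the spirit of the protect_from_forgery / skip_* options.
    def allow_cross_origin_js(only: [])
      self.cross_origin_js_actions += Array(only).map(&:to_s)
    end
  end

  private

  def verify_same_origin_js_request
    return unless request.format.js?
    return if request.xhr?
    return if cross_origin_js_actions.include?(action_name)
    head :forbidden
  end
end

class WidgetsController < ApplicationController
  include CrossOriginJsProtection
  allow_cross_origin_js only: [:show]   # public, embeddable widget endpoint
end
```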