Bug 171934 - Content from loopback addresses (e.g. 127.0.0.1) should not be considered mixed content
Status: REOPENED
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebCore Misc.
Version: WebKit Nightly Build
Hardware: Macintosh
OS: macOS 10.12
Importance: P2 Normal
Assignee: Nobody
Keywords: InRadar
Duplicates: 173161
Blocks: 140625
Reported: 2017-05-10 11:16 PDT by Birunthan Mohanathas
Modified: 2019-10-21 15:29 PDT
CC List: 23 users

Description Birunthan Mohanathas 2017-05-10 11:16:01 PDT
According to the spec, content from loopback addresses should no longer be treated as mixed content even in secure origins. See:
- https://github.com/w3c/webappsec-mixed-content/commit/349501cdaa4b4dc1e2a8aacb216ced58fd316165
- https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

For example, `fetch('http://127.0.0.1:1234/foo/bar')` on an HTTPS site should be allowed without triggering the mixed content blocker.

Note that Chrome (and soon Firefox) whitelists only '127.0.0.1' and '::1'. See:
- https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e
- https://bugzilla.mozilla.org/show_bug.cgi?id=903966
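
For illustration, that narrow whitelist amounts to an exact-match check on the URL's host. A minimal self-contained C++ sketch (a hypothetical helper, not actual WebKit or Chromium code):

#include <string_view>

// Sketch only: the loopback literals Chrome (and soon Firefox) whitelist.
// URLs carry IPv6 literals in brackets, so "[::1]" is accepted too.
static bool isWhitelistedLoopbackHost(std::string_view host)
{
    return host == "127.0.0.1" || host == "::1" || host == "[::1]";
}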
Comment 1 Alexey Proskuryakov 2017-05-10 21:07:17 PDT
We should consider blocking cross origin access to localhost completely, it's a pretty terrible security risk.
Comment 2 youenn fablet 2017-05-10 21:23:52 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

Are you suggesting that we block networking from a non-localhost web page to any localhost URL?
What kind of risks are you envisioning?
Comment 3 Alexey Proskuryakov 2017-05-10 22:12:57 PDT
> Are you suggesting that we block networking from a non-localhost web page to any localhost URL?

Correct.

> What kind of risks are you envisioning?

This opens up any service listening to connections on loopback interfaces to attacks of any kind. A web page can exploit request parsing bugs, or it can exfiltrate data that was meant to only be made available to a loopback counterpart.

This is similar in spirit to attacks that were recently addressed by dropping support for HTTP/0.9.
Comment 4 Birunthan Mohanathas 2017-05-10 22:23:54 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

That would be in violation of the spec. Also note that Chrome and Firefox
Nightly allow cross origin access to 127.0.0.1 and ::1 from both HTTP and
HTTPS sites.

(In reply to Alexey Proskuryakov from comment #3)
> This opens up any service listening to connections on loopback interfaces to
> attacks of any kind. A web page can exploit request parsing bugs, or it can
> exfiltrate data that was meant to only be made available to a loopback
> counterpart.

These are valid concerns, but please note that there are legitimate use cases
for localhost access. The Chromium commit message from comment 0 describes
what people have been forced to do for these legitimate cases:

> Currently, mixed content checks block http://127.0.0.1 from loading in a
> page delivered over TLS. I'm (belatedly) coming around to the idea that
> that restriction does more harm than good. In particular, I'll note that
> folks are installing new trusted roots and self-signing certs for that
> IP address, exposing themselves to additional risk for minimal benefit.
> Helpful locally installed software is doing the same, with even more
> associated risk.

Also see the discussion in https://bugs.chromium.org/p/chromium/issues/detail?id=607878

I think a better path forward would be to allow cross origin access to
127.0.0.1 and ::1 only if the loopback server sends back the CORS headers
(i.e. Access-Control-Allow-Origin) even over HTTP.
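
To illustrate, the opt-in would amount to the loopback server answering with explicit CORS headers, along these lines (a hypothetical exchange; the path and port are made up):

GET /status HTTP/1.1
Host: 127.0.0.1:1234
Origin: https://example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://example.com
Content-Type: application/json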
Comment 5 youenn fablet 2017-05-10 22:34:36 PDT
I am unsure of the compatibility risk of blocking.

The same argument could also be made for any internet web page trying to access LAN services, where the compatibility risk is probably even greater.

I wonder how often services accessible through the loopback interface are not also accessible from the LAN.
Comment 6 Alexey Proskuryakov 2017-05-11 11:01:56 PDT
I don't see any explanation in the linked issues of why it's desirable for non-local pages to access localhost. It's incredibly unlikely to be a legitimate use of web technology.

> I wonder how often services accessible through the loopback interface are not also accessible from the LAN.

That's pretty normal. Even when accessible from LAN, that's still a different security domain than any random webpage with random ad scripts.
Comment 7 youenn fablet 2017-05-11 11:38:38 PDT
I haven't looked at the links but I guess this issue is somehow orthogonal.
From a network perspective, a network intermediary will not be able to intercept any networking with localhost.
Comment 8 Birunthan Mohanathas 2017-05-30 23:58:18 PDT
(In reply to Alexey Proskuryakov from comment #6)
> I don't see any explanation in the linked issues of why it's desirable for
> non-local pages to access localhost. It's incredibly unlikely to be a
> legitimate use of web technology.

Several popular desktop applications (e.g. Spotify) install a server that binds to a localhost port. The web application (e.g. spotify.com) then uses the localhost server to control the desktop application. In order to work around the mixed-content blocker, the web application connects over HTTPS to a host (e.g. *.spotilocal.com) that simply points to 127.0.0.1:

For example:

$ dig xkbyzltjth.spotilocal.com A +short
127.0.0.1

You can see the spotilocal.com requests e.g. on this page: https://developer.spotify.com/technologies/widgets/spotify-play-button/

This ugly hack suffers from a number of problems: it doesn't work when offline due to DNS resolution failure, it doesn't work through proxies, etc.

Please keep in mind that Chrome and Firefox Nightly already allow plain HTTP connections to 127.0.0.1 without triggering the mixed content blocker. Edge is also planning to allow it (https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11963735/). For web compatibility, please consider allowing it in Safari as well.
Comment 9 Birunthan Mohanathas 2017-05-31 00:01:58 PDT
I forgot to mention that the hack also requires an HTTPS certificate. This means that the private key of the certificate is embedded in the desktop application... (I hear some applications have even resorted to installing a root CA so that they can use a self-signed certificate...)
Comment 10 Alexey Proskuryakov 2017-06-09 14:13:16 PDT
*** Bug 173161 has been marked as a duplicate of this bug. ***
Comment 11 homakov 2017-06-09 14:34:11 PDT
>This opens up any service listening to connections on loopback interfaces to attacks of any kind. A web page can exploit request parsing bugs, or it can exfiltrate data that was meant to only be made available to a loopback counterpart.


That's kind of true, but why not just open up access to localhost servers that opt in to being accessed? Preflight? Why kill communication entirely when there are a ton of use cases where localhost actually wants to be available?
Comment 12 homakov 2017-06-10 05:55:29 PDT
Birunthan: hey, if you're looking for a more or less future-proof way to talk to localhost, try opening a new window on the http:// protocol. Here is how we do it now: https://medium.com/@homakov/how-securelogin-invented-browser-app-communication-38383f98ca99
Comment 13 Brent Fulgham 2017-11-09 09:22:05 PST
<rdar://problem/34510778>
Comment 14 Brent Fulgham 2017-12-18 13:47:21 PST
I do not support this requested change in behavior. Allowing HTTP from localhost to be included in a secure page is a terrible idea for a few reasons:

1. There is no guarantee that the server being used is the one the page content was expecting to connect to. E.g., a trojan server running as part of an application you installed intercepts file transfer information when you go to an external cloud storage server site.

2. Content served through the local HTTP server can pull insecure information from anywhere on the internet, serve it to the hosting page, and completely undermine the protections HTTPS is supposed to provide.

We should do more to block this kind of poor design, not encourage it.
Comment 15 Brent Fulgham 2017-12-18 13:48:07 PST
(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons:
> 
> 1. There is no guarantee that the server being used is the one the page
> content was expecting to connect to. E.g., a trojan server running as part
> of an application you installed intercepts file transfer information when
> you go to an external cloud storage server site.
> 
> 2. Content served through the local HTTP server can pull insecure
> information from anywhere on the internet, serve it to the hosting page, and
> completely undermine the protections HTTPS is supposed to provide.
> 
> We should do more to block this kind of poor design, not encourage it.

Also: There's nothing to prevent /etc/hosts from directing a localhost address in the HTTPS application to some random place.
Comment 16 homakov 2017-12-18 20:43:35 PST
There are people in this thread with real-world use cases whose designs you just called poor, while offering some strawman arguments about "localhost servers being bad".

>Also: There's nothing to prevent /etc/hosts from directing a localhost address in the HTTPS application to some random place.

>1. There is no guarantee that the server being used is the one the page content was expecting to connect to.

And how is this a problem for a localhost helper that verifies Origin and asks for explicit confirmation before performing an action, for example? This design does not imply trusting a 3rd-party server.

>2. Content served through the local HTTP server can pull insecure information from anywhere on the internet, serve it to the hosting page, and completely undermine the protections HTTPS is supposed to provide.

Also, this localhost server can execute untrusted GET params...

>this kind of poor design, not encourage it.

We've been happy with the behavior of Chrome on this matter and will surely recommend users use the browser that follows web standards.

And what about all those helpers that run on localhost? Ever heard of Ethereum? The new breed of authentication solutions? It is crucial to be able to talk to local daemons.

There is a whole new range of use cases where you cannot upgrade the browser itself, but you can install a standalone daemon and let the browser talk to it.
Comment 17 Guillaume Rischard 2018-01-11 04:59:18 PST
> Also: There's nothing to prevent /etc/hosts from directing a localhost
> address in the HTTPS application to some random place.

For that reason, other browsers whitelist http://127.0.0.1, and not http://localhost.
Comment 18 Luca Cipriani 2018-05-23 05:53:30 PDT
Hello, Arduino officially speaking here.

We do have a system that HAS to interact with a local server: https://github.com/arduino/arduino-create-agent

This agent is already installed on a couple hundred thousand devices. Due to WebKit's blocking of 127.0.0.1, we are forced to create a Certificate Authority for localhost and install it in the certificate chain; this is much worse than just allowing http://127.0.0.1/ (we then, obviously, permanently remove the CA key).

If you read the W3C specs in detail, you can see that 127.0.0.1 is considered a priori authenticated, and indeed this is what both Firefox and Chrome do: they just respect the W3C specs and do not think they are better than the committee.

Again, here:
https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy
it is correctly stated:
If origin’s host component matches one of the CIDR notations 127.0.0.0/8 or ::1/128 [RFC4632], return "Potentially Trustworthy".

So you are hereby declaring that you do not want to comply with the W3C specs, which is a bit *strange* for a browser engine.

127.0.0.1 is trusted because it is on the same device as the user, and the app has to call it (if necessary, with all the CORS options needed) and is responsible for calling the right server.

In addition to that, the Firefox devs did a great job, I think: https://dxr.mozilla.org/mozilla-central/source/dom/security/nsMixedContentBlocker.cpp#744

Now coming to your questions:

1. That only happens if the website explicitly calls a server on localhost, so there will be some form of verification, I hope. In any case, blocking these apps should not be the job of the web engine; it should be delegated to the web developer.

2. False: this can only be done if the website calling localhost is passing some info, and the master application is fully in control of the data it will send to the 127.0.0.1:port application.

3. You should allow only 127.0.0.1 instead of localhost; I can agree on this.

4. Consider that, by the TLS/SSL specification, there is no way to create a valid HTTPS certificate for localhost or for 127.0.0.1 (obviously, and that is good).

5. Please provide any other alternative for this very common scenario: you have a web app that has to talk via HTTP to a local device in order to (as Arduino does) connect the web page to a serial monitor or a USB device. Consider that WebUSB is a draft: https://wicg.github.io/webusb/
and WebSerial does not really exist as of now.

6. So please, can you just try to respect public specifications? Users base their applications on the W3C specs.

Thank you,

Luca

(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons:
> 
> 1. There is no guarantee that the server being used is the one the page
> content was expecting to connect to. E.g., a trojan server running as part
> of an application you installed intercepts file transfer information when
> you go to an external cloud storage server site.
> 
> 2. Content served through the local HTTP server can pull insecure
> information from anywhere on the internet, serve it to the hosting page, and
> completely undermine the protections HTTPS is supposed to provide.
> 
> We should do more to block this kind of poor design, not encourage it.
Comment 19 Luca Cipriani 2018-05-23 06:37:57 PDT
Edge fixed the same issue a few hours ago:

https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11963735/
Comment 20 Alexey Proskuryakov 2018-05-23 09:33:38 PDT
Having web pages access (or enumerate) local devices would have to come with a meaningful permission model, which is unlikely to exist. Asking the user anything along the lines of "www.arduino.corp.trusted.myphishingpage.cc would like to access 127.0.0.1:23764 for an unknown reason with unknown consequences, Allow/Block" wouldn't make any security sense.
Comment 21 Michael Catanzaro 2018-05-23 11:21:33 PDT
(In reply to Luca Cipriani from comment #18)
> Hello, Arduino officially speaking here.
> 
> We do have a system that HAS to interact with a local server:
> https://github.com/arduino/arduino-create-agent
> 
> This agent is already installed on a couple hundred thousand devices. Due
> to WebKit's blocking of 127.0.0.1, we are forced to create a Certificate
> Authority for localhost and install it in the certificate chain; this is
> much worse than just allowing http://127.0.0.1/ (we then, obviously,
> permanently remove the CA key).

It seems like a pretty good argument in favor of reopening this issue and adopting the Firefox/Chrome behavior. Creating a certificate for 127.0.0.1 is surely worse than the alternative. And there really isn't much value in performing mixed content checks on localhost content.

(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons

I'm pretty much satisfied by the responses to this above.

(In reply to Alexey Proskuryakov from comment #20)
> Having web pages access (or enumerate) local devices would have to come with
> a meaningful permission model, which is unlikely to exist. Asking the user
> anything along the lines of "www.arduino.corp.trusted.myphishingpage.cc
> would like to access 127.0.0.1:23764 for an unknown reason with unknown
> consequences, Allow/Block" wouldn't make any security sense.

This makes more sense to me, but the problem is that such access is already allowed from http:// websites, right? Surely mixed content blocking is not the right way to enforce restrictions on accessing local content. Looking at https://bugs.chromium.org/p/chromium/issues/detail?id=607878, it looks like the mixed content spec developers have spent a lot of time thinking about this, including the link to https://mikewest.github.io/cors-rfc1918/ in comment 6.
Comment 22 Alexey Proskuryakov 2018-05-23 13:49:54 PDT
As mentioned in comment 1, I think that we should block localhost access for http too.
Comment 23 Luca Cipriani 2018-05-28 03:34:29 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

Hi Alexey,

I can partially agree on this, but there should be an alternative. Please also look at how Chrome is addressing it (it has been under discussion since March 2014):

https://bugs.chromium.org/p/chromium/issues/detail?id=378566

Now, the fact that you can easily circumvent everything anyway by just using a plain HTTP website that calls http://127.0.0.1 means that the mixed content block is not increasing the overall security of your users. In fact, you are decreasing security, because to use this feature users install CA certificates.


This is what we are doing now: https://letsencrypt.org/docs/certificates-for-localhost/ plus signing every request coming from the web to verify it comes from our specific servers. But this is a problem for the server running on localhost, which needs some sort of security and authentication system of its own. (I remember CUPS using the username/password of the root user on many systems since the early days.)

In my opinion, you are not solving the security issue by enforcing the mixed content error for 127.0.0.1; an attacker can still circumvent it by using a plain HTTP website. You would solve it if you completely removed the ability for 127.0.0.1 to be contacted from the web, but then please provide an API to let web applications contact the hardware; we are no longer in the '90s.

To quote Mike West, who I believe is the world's foremost expert on CORS policy for browsers:

https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e

"Currently, mixed content checks block http://127.0.0.1 from loading in a
page delivered over TLS. I'm (belatedly) coming around to the idea that
that restriction does more harm than good. In particular, I'll note that
folks are installing new trusted roots and self-signing certs for that
IP address, exposing themselves to additional risk for minimal benefit.
Helpful locally installed software is doing the same, with even more
associated risk."

Our alternative is to tell users to just use any browser engine other than WebKit. So please let us know if you want to include the change in the roadmap, or at least let us know if it is going to be WONTFIX, so we can decide whether to phase WebKit out accordingly.

Thank you!
Comment 24 Michael Catanzaro 2018-05-28 08:28:19 PDT
(In reply to Luca Cipriani from comment #23)
> To mention Mike West which I believe is the main expert in the world about
> CORS policy for browsers:

I don't know much about CORS, but at least he's definitely the authority on mixed content. In bug #140625 I'm tracking other cases where WebKit's behavior diverges from his specs. If you see any other bugs related to mixed content, adding a dependency on bug #140625 would be appreciated.

(In reply to Alexey Proskuryakov from comment #22)
> As mentioned in comment 1, I think that we should block localhost access for
> http too.

I won't comment on whether or not WebKit should do that.

If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.

But I rather doubt that will really happen. So long as WebKit continues to allow localhost access for http://, I'm pretty sure it really does not make any sense to block mixed content from 127.0.0.1. So if we treat this solely as a mixed content issue, and assume WebKit will continue to allow loading content from localhost, then we should reopen this bug.
Comment 25 Alexey Proskuryakov 2018-05-28 13:21:09 PDT
I actually think that getting users to trust a certificate is better for multiple reasons.

1. It greatly reduces the impacted group, and makes it a less interesting target.

2. It requires doing something that would be a deterrent to proceeding, which is good. One may decide to limit the hack to a VM, or use a less secure secondary browser just for this purpose, or make the vendor change their approach, or decide to not work with this vendor at all. All of those are better for security.

> I can partially agree on this but there should be an alternative.

I'm not sure why you are insisting that a web browser ever needs to talk to locally installed software and hardware at all. This is low benefit and high risk.

If we had to provide an opt-in, I would argue that it should be implemented in a way that discourages its use. Installing a trusted certificate doesn't sound so bad. Another alternative could be a Developer menu option that allows 127.0.0.1 access just for the currently open window. Or maybe one can take a clue from how NPAPI plug-ins are handled by each browser.

> If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.

Good point, let's make it concrete in bug 186039.
Comment 26 Nathan James 2018-08-10 04:01:08 PDT
> I'm not sure why you are insisting that a web browser ever needs to talk to locally installed software and hardware at all. This is low benefit and high risk.

This is the root of the problem here. What you think devs/users might need it for is irrelevant; there are proven use cases of this, some of them included in this thread, which have been causing users to seek out other browsers.

This seems to be direct neglect of the standard, driven by a single person.
Comment 27 antoine 2018-08-15 13:17:51 PDT
This issue of blocking localhost, as well as not allowing mixed content, completely blocks Safari from multiple important use cases. The IoT field and the fintech/payments field are full of use cases for talking to localhost. Example: a point of sale running in the browser needs to talk to a server on localhost to send a payment request to a terminal on the network. With everything on the web being HTTPS nowadays, the browser needs to be able to talk to an HTTP service on the machine. Self-signed certificates are nonsense in this context.

I fail to understand the rationale to block localhost here.
- Developers are giving you valid use cases
- It's in the spec
- Other browsers implement it
- There are no workarounds

If there were credible attacks based on this feature, you'd see Chrome and Firefox users being attacked left and right. This is not the case.

Can we please allow this so that the Web doesn't take a step backwards, and so that we don't have to tell our users "oh you need to use Chrome or Firefox, this doesn't work on Safari". There's a spec - don't be an IE6 developer.
Comment 28 youenn fablet 2018-08-15 16:29:03 PDT
It is unclear to me why we tie mixed content checks to localhost access.
Attacks on localhost servers are currently easy to carry out regardless of mixed content blocking; I do not see what protection we get there.
As for web sites that get data from localhost, they are the ones making the requests, so they should know what the security model is.

Some workarounds:
- Self signed certificates :(
- Deliver the web site through HTTP :(
- Make the connection between the browser and the localhost server go through a proxy: HTTPS/WSS/WebRTC. The localhost server would need to keep a connection with this proxy so that it is available, or it would need to be 'woken up' by having the browser navigate to it.

We should think of the best way to protect Web apps/WebKit apps from these attacks (and probably LAN server access in general). Maybe an opt-in or content blockers could help there.
It is reasonable to think that some WebKit applications will want to allow access to 127.0.0.1 and for good reasons. I do not see why mixed content checks should interfere with such apps.

Aligning with the spec makes sense to me at this point.
Comment 29 homakov 2018-08-15 20:18:03 PDT
>I fail to understand the rationale to block localhost here.

Antoine, you must be new here :) Arguments have no power in the land of this ticket.

This is "Mr Proskuryakov against the world" thread. After getting dozens of reasons to make a sane default or at least follow the spec, even after getting a direct endorsement by a security expert like me, that it is indeed totally fine and safe (I fail to see he has any understanding of threat modeling and web security), nothing's changed. 

Now I keep this URL in my "this is why Safari sucks" collection to give websec friends a good laugh.
Comment 30 antoine 2018-08-15 21:52:06 PDT
Indeed “new” here, but not new to web browser development - ex-Firefox dev here, back when IE was still dominant. :) I’m very proud of what we achieved in the past 20 years but disheartened when I read such a thread.

I’m truly baffled by the “I myself personally didn’t encounter any use cases so it’s obviously useless to the world” argument. I thought the web community had moved past that.

I’m also baffled by some of the “security concerns” I read here. “If a Trojan is installed on the computer...” - if a Trojan is installed, you have bigger things to worry about. If a decision is made in the name of security, shouldn’t a security body review it? And to that point... didn’t one ALREADY REVIEW this exact point? Have there been counterexamples? Attacks in the wild? Zero-day exploits? Or are we just thinking of the children?

Even looking at the future of the web, there are drafts in development to actively let the browser talk to hardware - whether Bluetooth, USB, or even through raw tcp sockets. Thinking that browsers should be banned from hardware communication is curing the disease by killing the patient. And also going against a major trend in the future of the web. Yay for native apps?

Once again: can we follow the spec and not break the web even further? Please?
Comment 31 antoine 2018-08-16 16:10:26 PDT
As an example of how much this is needed, the Chrome team even implemented a Native Messaging API. It is presently available for extensions, but there is talk of bringing it straight to the web.
https://developer.chrome.com/extensions/nativeMessaging

Communication from web to native and back is a very real use case. It should be allowed in WebKit/Safari (with CORS to mitigate any concern) until you decide to supersede it with something better, like Chrome's Native Messaging API. At least there will be a path for developers.
Comment 32 Luca Cipriani 2018-08-20 06:19:00 PDT
Firefox is going in the same direction. Better tell our users to just not use this browser.
Comment 33 oeway 2018-10-27 01:49:02 PDT
I registered an account here just for this issue; I hope it can be reconsidered and fixed in the near future.

Right now, we have to instruct users to use Chrome and Firefox, **not Safari**.
Comment 34 Irakli Gozalishvili 2019-01-18 09:58:46 PST
(In reply to Alexey Proskuryakov from comment #25)
> I actually think that getting users to trust a certificate is better for
> multiple reasons.
> 
> 1. It greatly reduces the impacted group, and makes it a less interesting
> target.
> 
> 2. It requires doing something that would be a deterrent to proceeding,
> which is good. One may decide to limit the hack to a VM, or use a less
> secure secondary browser just for this purpose, or make the vendor change
> their approach, or decide to not work with this vendor at all. All of those
> are better for security.
> 
> > I can partially agree on this but there should be an alternative.
> 
> I'm not sure why you are insisting that a web browser ever needs to talk to
> locally installed software and hardware at all. This is low benefit and high
> risk.
> 
> If we had to provide an opt-in, I would argue that it should be implemented
> in a way that discourages its use. Installing a trusted certificate doesn't
> sound so bad. Another alternative could be a Developer menu option that
> allows 127.0.0.1 access just for the currently open window. Or maybe one can
> take a clue from how NPAPI plug-ins are handled by each browser.
> 
> > If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.
> 
> Good point, let's make it concrete in bug 186039.


Hi Alexey,

This thread has gotten pretty toxic; threats of recommending other browsers are definitely not helping to drive the arguments. I also understand your point that allowing websites to talk to programs on the device does create additional security risks. However, I would like to make the argument that not allowing them to talk to loopback addresses does in fact create larger security risks:

The fact of the matter is that today, due to this restriction, applications are forced to do something that is much worse: they create DNS records like `local.myapp 127.0.0.1` and bundle a TLS certificate + keys with the application.

Note that this does not even require installing a trusted certificate root, as you mentioned in your comment.


Additionally, you could consider doing something along the lines of `document.requestStorageAccess` - say, `document.requestLoopbackAccess` - and provide a similar user consent prompt:
https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/

where, instead of prompting the user to give explicit access to site A when browsing site B, you would rephrase site A as "application A".
Comment 35 Tim Perry 2019-01-23 09:35:26 PST
Just to chime in here too: my application (https://httptoolkit.tech) also requires localhost access from the web. It consists of a hosted web UI which interacts with an installed desktop service that's used to start & manage other local applications & servers.

My app works in every modern browser except Safari, and unfortunately I'm going to have to simply tell that to users.

I can still see objections here that there's no good use case for web to localhost communication. I'd like to reiterate the evidence from this thread against that, so we can clear that argument out of the way:

* Major common applications like Spotify need this behaviour to interact with desktop applications from the web. They currently work suboptimally because of their workarounds for this (with spotilocal - see comment #8 above).

* Many hardware companies use this behaviour to build web UIs that can interact with attached hardware, including Arduino, with software depending on this running on hundreds of thousands of devices. WebUSB may resolve this for USB devices, but not yet, and only for USB devices specifically.

* There's a substantial ecosystem of Ethereum sites built entirely around localhost communication from the web: https://github.com/ethereum/web3.js

* Many developers like myself in this thread, whose applications are broken by this behaviour, in Safari only.

Imo all of these use cases are reasonable, so it's certainly not the case that there are no legitimate use cases at all.

Could anybody summarize the outstanding security concerns around this? What specific attacks would this expose users to? It would be great to try & make progress here if possible, or to find concrete security issues that could be relevant to the other browsers that have implemented this if not.
Comment 36 Maciej Stachowiak 2019-01-23 22:28:31 PST
(In reply to youenn fablet from comment #28)
> It is unclear to me why we tie mixed content checks to localhost access.
> Attacks on localhost servers are currently easy to carry out regardless of
> mixed content blocking; I do not see what protection we get there.

Let's think through this. The mixed content policy is meant to protect users from being misled into thinking they are interacting with a secure page with content from a known source, but effectively it's not, because non-https content could have been tampered with in transit. We don't want to give users a false sense of security in this case. It might not be safe to type a credit card number or a password on such a page.

The suggested risks of any access from remote pages to the loopback address are:
(1) Pages could exploit local web services that weren't meant to be accessed from an untrusted source.
(2) Trojan software could install a trap version of a local web service that aims to exploit the page making use of it.

It seems to me these threats are not properly addressed by a failed mixed content check (which would either result in an insecure indicator or a failed resource load if the referring page is http:). The first attack could be performed from an http: page, or in any case the page performing it may not care about an "insecure" warning in the location field. By the time that shows up, the attack has likely already happened, and users would not expect "insecure" to put them on notice of this. A rogue service as in (2) could still exploit pages that deploy any of the many workarounds for this limitation. Furthermore, if malware can run an http server, it can probably do other malicious things locally to interfere with the integrity of websites.

So while it may make sense to consider limitations for remote access to local web servers, holding out on this tweak to the mixed content rules does not fulfill the purpose of mixed content rules, nor does it properly mitigate the attacks.

Therefore reopening because I think this was closed based on an incorrect rationale.


> As for web sites that get data from localhost, they are the ones making the
> requests, so they should know what the security model is.
> 
> Some workarounds:
> - Self signed certificates :(
> - Deliver the web site through HTTP :(
> - Make the connection between the browser and the localhost server go
> through a proxy: HTTPS/WSS/WebRTC. The localhost server would need to keep
> a connection with this proxy so that it is available, or it would need to
> be 'woken up' by having the browser navigate to it.
> 
> We should think of the best way to protect Web apps/WebKit apps from these
> attacks (and probably LAN server access in general). Maybe an opt-in or
> content blockers could help there.
> It is reasonable to think that some WebKit applications will want to allow
> access to 127.0.0.1 and for good reasons. I do not see why mixed content
> checks should interfere with such apps.
> 
> Aligning with the spec makes sense to me at this point.
Comment 37 Luca Cipriani 2019-01-24 00:47:41 PST
Thank you so much for reopening this issue. Let us know how we can help with the process and if you need more info on some use-cases. We have seen other projects having the same issue, here some of them:
https://github.com/arduino/arduino-create-agent/network/members


Thank you!
Comment 38 Tim Perry 2019-01-24 01:07:19 PST
Totally agree with the above, thanks for reopening this!

A couple of additional points on the two risks you pointed out, just to reinforce that they're not a concern:

> Pages could exploit local web services that weren't meant to be accessed from an untrusted source.

This same risk applies equally to any non-localhost web application. The real defence against this attack is for local web services to use CORS appropriately to manage cross-domain requests, like any other domain. That blocks these requests entirely and solves this issue (assuming localhost doesn't have any special CORS behaviour, which is true afaik).

> Trojan software could install a trap version of a local web service that aims to exploit the page making use of it.

You mentioned that malicious software running on your computer likely already poses a larger threat here, which is certainly true.

In addition though, malicious software running on your computer could easily include a valid certificate for a real domain that resolves to localhost (localhost.evil.com), and then host a secure HTTPS service on localhost, to avoid all warnings.

Even if your trojan does need to interact with a web session for some reason, it's very easy to defeat localhost mixed content protection like this.
Comment 39 homakov 2019-01-24 07:18:25 PST
Happy to see this reopened. Safari has really been hitting many nerves with this unreasonable prohibition. It's been widely concluded there is no (new) security threat that otherwise wouldn't exist anyway.

What stops us from implementing it soon? Who needs to approve this? It must be a one-line change.
Comment 40 Michael Catanzaro 2019-01-24 10:00:07 PST
I think this is probably a small change in MixedContentChecker::isMixedContent in Source/WebCore/loader/MixedContentChecker.cpp.

The challenge is going to be layout tests. First, the change requires a layout test of its own. But also, all our mixed content layout tests use an Apache server running on 127.0.0.1, so all those tests would break if we fixed this. Since we'd probably be allowing 127.0.0.1 and ::1 but not localhost, as per the spec, I think we could switch the URIs in all the existing mixed content tests to localhost (verifying that mixed content blocking still applies there), and a new test for this bug could use 127.0.0.1 and ::1 to verify that the mixed content checks don't apply to the loopback addresses.

P.S. If anyone is interested in contributing -- remember WebKit is an open source project after all -- see https://webkit.org/contributing-code/ for tips. Changes can be approved by any reviewer, though since this is a controversial issue we'd seek consensus first.
Comment 41 Michael Catanzaro 2019-01-24 10:04:09 PST
BTW the tests are in LayoutTests/http/tests/security/mixedContent. For example, in LayoutTests/http/tests/security/mixedContent/resources/frame-with-insecure-image.html, we could try changing this:

<img src="http://127.0.0.1:8080/security/resources/compass.jpg">

(which would be broken by this change), to this:

<img src="http://localhost:8080/security/resources/compass.jpg">

(which should still be blocked).
Comment 42 Michael Catanzaro 2019-01-24 10:07:48 PST
Hm, I've spent about two minutes looking at the spec, but it does say:

If origin’s host component is "localhost" or falls within ".localhost", and the user agent conforms to the name resolution rules in [let-localhost-be-localhost], return "Potentially Trustworthy".

So... plan probably foiled.
Comment 43 Michael Catanzaro 2019-01-25 08:19:57 PST
I guess we'll need a new setting just for use by tests, and a TestController message to enable/disable it for testing purposes.
Comment 44 Rob McVey 2019-04-05 16:06:27 PDT
Thanks for reopening this issue. Just saw the release notes for Safari 12.1 and it reminded me to check on the status of this. Any updates that can be provided on this issue? I see that it's still unassigned. I for one would really appreciate it if this could be prioritized.

Thanks again!
Comment 45 antoine 2019-10-16 16:08:11 PDT
I'll echo the previous comment. Any progress on this will be greatly appreciated.
Comment 46 antoine 2019-10-18 10:25:56 PDT
Michael Catanzaro: I see that SecurityOrigin.cpp has this 

// FIXME: Ensure that localhost resolves to the loopback address. 

in

bool SecurityOrigin::isLocalHostOrLoopbackIPAddress(StringView host)

I would suggest that the fix to this bug not tackle "localhost" resolution but focus on the loopback address, and a separate bug be filed for localhost.

In that context, the fix would only be changing the function MixedContentChecker::isMixedContent line 62:

return !SecurityOrigin::isSecure(url);

to

return !(SecurityOrigin::isSecure(url) || SecurityOrigin::isLoopbackIPAddress(url));

Modifications to tests would involve replacing 127.0.0.1 with localhost at the appropriate places (which would then be modified as necessary as part of a separate bug to tackle localhost rules).

Would a fix with those changes be acceptable?
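
For reference, here is a self-contained sketch of what such an isLoopbackIPAddress predicate could look like (hypothetical code; the existing WebKit helper is isLocalHostOrLoopbackIPAddress in SecurityOrigin.cpp, and its actual parsing differs):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>

// Sketch only: true when `host` is an IP literal inside 127.0.0.0/8 or
// ::1/128, the CIDR ranges the Secure Contexts spec deems trustworthy.
static bool isLoopbackIPAddress(const std::string& host)
{
    in_addr v4;
    if (inet_pton(AF_INET, host.c_str(), &v4) == 1)
        return (ntohl(v4.s_addr) >> 24) == 127; // 127.0.0.0/8

    // URLs wrap IPv6 literals in brackets, e.g. "[::1]"; strip them.
    std::string v6Host = host;
    if (v6Host.size() > 1 && v6Host.front() == '[' && v6Host.back() == ']')
        v6Host = v6Host.substr(1, v6Host.size() - 2);

    in6_addr v6;
    if (inet_pton(AF_INET6, v6Host.c_str(), &v6) == 1)
        return IN6_IS_ADDR_LOOPBACK(&v6); // ::1/128
    return false;
}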
Comment 47 Michael Catanzaro 2019-10-19 07:27:03 PDT
(In reply to antoine from comment #46)
> Michael Catanzaro: I see that SecurityOrigin.cpp has this 
> 
> // FIXME: Ensure that localhost resolves to the loopback address. 
> 
> in
> 
> bool SecurityOrigin::isLocalHostOrLoopbackIPAddress(StringView host)
> 
> I would suggest that the fix to this bug not tackle "localhost" resolution
> but focus on the loopback address, and a separate bug be filed for localhost.

In fact, the FIXME is not fixable at the WebKit level. DNS resolution is performed by platform libraries. In the case of WebKitGTK and WPE, that's done by GIO, which we just fixed in https://gitlab.gnome.org/GNOME/glib/merge_requests/616. For Mac, probably either CoreFoundation or perhaps the system resolver, not sure. It would be appropriate to replace the FIXME with a comment indicating that WebKit assumes localhost is always really localhost.

(In reply to antoine from comment #46)
> In that context, the fix would only be changing the function
> MixedContentChecker::isMixedContent line 62:
> 
> return !SecurityOrigin::isSecure(url);
> 
> to
> 
> return !(SecurityOrigin::isSecure(url) ||
> SecurityOrigin::isLoopbackIPAddress(url));

Nice investigation!
 
> Modifications to tests would involve replacing 127.0.0.1 with localhost at
> the appropriate places (which would then be modified as necessary as part
> of a separate bug to tackle localhost rules).

I'm not sure if it will be that easy. E.g. this change will likely break all the mixed content tests. I think we will just need to have a setting that tests can use to choose which behavior they get. See my suggestion in comment #43

> Would a fix with those changes be acceptable?

I *believe* we have consensus on this change at this point, so as long as there's a new test and it doesn't break old tests, I think so. Seems clear that the test work will be harder than the change itself.
Comment 48 Michael Catanzaro 2019-10-19 07:28:38 PDT
(In reply to Michael Catanzaro from comment #47)
>It would be
> appropriate to replace the FIXME with a comment indicating that WebKit
> assumes localhost is always really localhost.

Of course, it would be a good idea for someone familiar with macOS or iOS to check what really happens on Apple platforms before doing so.
Comment 49 antoine 2019-10-19 23:04:17 PDT
> > Modifications to tests would involve replacing 127.0.0.1 with localhost
> > at the appropriate places (which would then be modified as necessary as
> > part of a separate bug to tackle localhost rules).
> 
> I'm not sure if it will be that easy. E.g. this change will likely break all
> the mixed content tests. I think we will just need to have a setting that
> tests can use to choose which behavior they get. See my suggestion in
> comment #43

Makes sense - I actually got things to work by swapping 127.0.0.1 for localhost in the mixed content tests (along with the string in the expected result), but I guess the TestController is a cleaner approach. I'll give it a shot in a separate branch. Thanks Michael!
Comment 50 Michael Catanzaro 2019-10-20 10:03:10 PDT
(In reply to antoine from comment #49)
> Makes sense - I actually got things to work by swapping 127.0.0.1 for
> localhost in the mixed content tests (along with the string in the expected
> result), but I guess the TestController is a cleaner approach. I'll give it
> a shot in a separate branch. Thanks Michael!

Oh, so you chose to whitelist only 127.0.0.1 and ::1, and not also localhost. In that case, modifying TestController is of course not required.

If you want to whitelist localhost as well -- which I expect is desired -- then you will need to add a TestController setting to make the tests pass.

But it's also fine to start out by whitelisting 127.0.0.1 and ::1, and leave localhost for a follow-up patch.

(In reply to antoine from comment #46)
> In that context, the fix would only be changing the function
> MixedContentChecker::isMixedContent line 62:
> 
> return !SecurityOrigin::isSecure(url);
> 
> to
> 
> return !(SecurityOrigin::isSecure(url) ||
> SecurityOrigin::isLoopbackIPAddress(url));

Actually, it would be better to change SecurityOrigin::isSecure directly instead, since loopback can be trusted for all purposes, not just mixed content checking.
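
To make the two shapes concrete, a sketch with stand-in functions (hypothetical signatures, not the real WebKit ones):

#include <string>

// Hypothetical stand-ins for the predicates discussed in this thread.
static bool isSchemeSecure(const std::string& scheme)
{
    return scheme == "https" || scheme == "wss";
}

static bool isLoopbackIPAddress(const std::string& host)
{
    return host == "127.0.0.1" || host == "::1" || host == "[::1]";
}

// Comment 46's shape: special-case loopback only in the mixed content check.
static bool isMixedContent(const std::string& scheme, const std::string& host)
{
    return !(isSchemeSecure(scheme) || isLoopbackIPAddress(host));
}

// The shape suggested above: fold loopback into isSecure itself, so every
// caller asking "is this URL secure?" benefits, not just the mixed content
// checker.
static bool isSecure(const std::string& scheme, const std::string& host)
{
    return isSchemeSecure(scheme) || isLoopbackIPAddress(host);
}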
Comment 51 antoine 2019-10-20 12:31:12 PDT
(In reply to Michael Catanzaro from comment #50)
> Oh, so you chose to whitelist only 127.0.0.1 and ::1, and not also
> localhost. In that case, modifying TestController is of course not required.
> 
> If you want to whitelist localhost as well -- which I expect is desired --
> then you will need to add a TestController setting to make the tests pass.
> 
> But it's also fine to start out by whitelisting 127.0.0.1 and ::1, and leave
> localhost for a follow-up patch.

Sounds good - that's the approach I'm more comfortable with, as I'm not certain of the implications of whitelisting localhost (see https://www.w3.org/TR/secure-contexts/#localhost "Given that uncertainty, this document errs on the conservative side by special-casing 127.0.0.1, but not localhost.").


> Actually, it would be better to change SecurityOrigin::isSecure directly
> instead, since loopback can be trusted for all purposes, not just mixed
> content checking.

Makes sense - will make the modification.

This should allow all present tests to pass. In terms of new tests: should we duplicate all of the mixed content tests to check for 127.0.0.1 / ::1, or have only one test for that specific use case?
Comment 52 antoine 2019-10-21 15:29:12 PDT
Michael - I have a patch ready to go. Insights on any new tests to add would be appreciated, as this is my first contribution to WebKit. Thanks!