Bug 171934 - Don't treat loopback addresses (127.0.0.0/8, ::1/128, localhost, .localhost) as mixed content
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebCore Misc.
Version: WebKit Nightly Build
Hardware: Mac macOS 10.12
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords: InRadar
Duplicates: 173161
Depends on: 127676 218623 218627 218795 219257 250607
Blocks: 140625 250776
 
Reported: 2017-05-10 11:16 PDT by Birunthan Mohanathas
Modified: 2023-06-29 17:33 PDT
CC: 62 users

See Also:


Attachments
WIP Patch (15.92 KB, patch)
2020-10-30 10:35 PDT, Frédéric Wang (:fredw)
ews-feeder: commit-queue-
WIP Patch (9.33 KB, patch)
2020-10-30 23:36 PDT, Frédéric Wang (:fredw)
ews-feeder: commit-queue-
WIP Patch (44.76 KB, patch)
2020-10-31 08:21 PDT, Frédéric Wang (:fredw)
ews-feeder: commit-queue-
Patch (69.52 KB, patch)
2020-11-04 07:15 PST, Frédéric Wang (:fredw)
ews-feeder: commit-queue-
Patch (81.09 KB, patch)
2020-11-04 23:54 PST, Frédéric Wang (:fredw)
ews-feeder: commit-queue-

Description Birunthan Mohanathas 2017-05-10 11:16:01 PDT
According to the spec, content from loopback addresses should no longer be treated as mixed content, even in secure contexts. See:
- https://github.com/w3c/webappsec-mixed-content/commit/349501cdaa4b4dc1e2a8aacb216ced58fd316165
- https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

In other words, e.g. `fetch('http://127.0.0.1:1234/foo/bar')` on an HTTPS site should be allowed without triggering the mixed content blocker.

Note that Chrome (and soon Firefox) whitelist only '127.0.0.1' and '::1'. See:
- https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e
- https://bugzilla.mozilla.org/show_bug.cgi?id=903966
Comment 1 Alexey Proskuryakov 2017-05-10 21:07:17 PDT
We should consider blocking cross origin access to localhost completely; it's a pretty terrible security risk.
Comment 2 youenn fablet 2017-05-10 21:23:52 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

Are you suggesting to block networking from a non-localhost web page to any localhost URL?
What kind of risks are you envisioning?
Comment 3 Alexey Proskuryakov 2017-05-10 22:12:57 PDT
> Are you suggesting to block networking from a non-localhost web page to any localhost URL?

Correct.

> What kind of risks are you envisioning?

This opens up any service listening to connections on loopback interfaces to attacks of any kind. A web page can exploit request parsing bugs, or it can exfiltrate data that was meant to only be made available to a loopback counterpart.

This is similar in spirit to attacks that were recently addressed by dropping support for HTTP/0.9.
Comment 4 Birunthan Mohanathas 2017-05-10 22:23:54 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

That would be in violation of the spec. Also note that Chrome and Firefox
Nightly allow cross origin access to 127.0.0.1 and ::1 from both HTTP and
HTTPS sites.

(In reply to Alexey Proskuryakov from comment #3)
> This opens up any service listening to connections on loopback interfaces to
> attacks of any kind. A web page can exploit request parsing bugs, or it can
> exfiltrate data that was meant to only be made available to a loopback
> counterpart.

These are valid concerns, but please note that there are legitimate use cases
for localhost access. The Chromium commit message from comment 0 describes
what people have been forced to do for these legitimate cases:

> Currently, mixed content checks block http://127.0.0.1 from loading in a
> page delivered over TLS. I'm (belatedly) coming around to the idea that
> that restriction does more harm than good. In particular, I'll note that
> folks are installing new trusted roots and self-signing certs for that
> IP address, exposing themselves to additional risk for minimal benefit.
> Helpful locally installed software is doing the same, with even more
> associated risk.

Also see the discussion in https://bugs.chromium.org/p/chromium/issues/detail?id=607878

I think a better path forward would be to allow cross origin access to
127.0.0.1 and ::1 only if the loopback server sends back the CORS headers
(i.e. Access-Control-Allow-Origin) even over HTTP.
Comment 5 youenn fablet 2017-05-10 22:34:36 PDT
I am unsure of the compatibility risk of blocking.

The same argument could also be made for any internet web page trying to get access to LAN services, where the compatibility risk is probably even greater.

I wonder how common it is for services accessible through the loopback interface not to be accessible from the LAN.
Comment 6 Alexey Proskuryakov 2017-05-11 11:01:56 PDT
I don't see any explanation in the linked issues of why it's desirable for non-local pages to access localhost. It's incredibly unlikely to be a legitimate use of web technology.

> I wonder how frequent it is for services accessible through the local loop to not be accessible from the LAN.

That's pretty normal. Even when accessible from the LAN, that's still a different security domain than any random webpage with random ad scripts.
Comment 7 youenn fablet 2017-05-11 11:38:38 PDT
I haven't looked at the links but I guess this issue is somehow orthogonal.
From a network perspective, a network intermediary will not be able to intercept any networking with localhost.
Comment 8 Birunthan Mohanathas 2017-05-30 23:58:18 PDT
(In reply to Alexey Proskuryakov from comment #6)
> I don't see any explanation in the linked issues of why it's desirable for
> non-local pages to access localhost. It's incredibly unlikely to be a
> legitimate use of web technology.

Several popular desktop applications (e.g. Spotify) install a server that binds to a localhost port. The web application (e.g. spotify.com) then uses the localhost server to control the desktop application. In order to work around the mixed-content blocker, the web application connects over HTTPS to a host (e.g. *.spotilocal.com) that simply points to 127.0.0.1:

For example:

$ dig xkbyzltjth.spotilocal.com A +short
127.0.0.1

You can see the spotilocal.com requests e.g. on this page: https://developer.spotify.com/technologies/widgets/spotify-play-button/

This ugly hack suffers from a number of problems: it doesn't work when offline due to DNS resolution failure, it doesn't work through proxies, etc.

Please keep in mind that Chrome and Firefox Nightly already allow plain HTTP connections to 127.0.0.1 without triggering the mixed content blocker. Edge is also planning to allow it (https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11963735/). For web compatibility, please consider allowing it in Safari as well.
Comment 9 Birunthan Mohanathas 2017-05-31 00:01:58 PDT
I forgot to mention that the hack also requires an HTTPS certificate. This means that the private key of the certificate is embedded in the desktop application... (I hear some applications have even resorted to installing a root CA so that they can use a self-signed certificate...)
Comment 10 Alexey Proskuryakov 2017-06-09 14:13:16 PDT
*** Bug 173161 has been marked as a duplicate of this bug. ***
Comment 11 homakov 2017-06-09 14:34:11 PDT
>This opens up any service listening to connections on loopback interfaces to attacks of any kind. A web page can exploit request parsing bugs, or it can exfiltrate data that was meant to only be made available to a loopback counterpart.


That's kind of true, but why not just allow access to localhost servers that opt in to being accessed? Via a preflight? Why kill communication entirely when there are a ton of use cases where localhost actually wants to be available?
Comment 12 homakov 2017-06-10 05:55:29 PDT
Birunthan: hey, if you're looking for a more or less future-proof way to talk to localhost, try opening a new window on the http:// protocol. Here is how we do it now: https://medium.com/@homakov/how-securelogin-invented-browser-app-communication-38383f98ca99
Comment 13 Brent Fulgham 2017-11-09 09:22:05 PST
<rdar://problem/34510778>
Comment 14 Brent Fulgham 2017-12-18 13:47:21 PST
I do not support this requested change in behavior. Allowing HTTP from localhost to be included in a secure page is a terrible idea for a few reasons:

1. There is no guarantee that the server being used is the one the page content was expecting to connect to. E.g., a trojan server running as part of an application you installed intercepts file transfer information when you go to an external cloud storage server site.

2. Content served through the local HTTP server can pull insecure information from anywhere on the internet, serve it to the hosting page, and completely undermine the protections HTTPS is supposed to provide.

We should do more to block this kind of poor design, not encourage it.
Comment 15 Brent Fulgham 2017-12-18 13:48:07 PST
(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons:
> 
> 1. There is no guarantee that the server being used is the one the page
> content was expecting to connect to. E.g., a trojan server running as part
> of an application you installed intercepts file transfer information when
> you go to an external cloud storage server site.
> 
> 2. Content served through the local HTTP server can pull insecure
> information from anywhere on the internet, serve it to the hosting page, and
> completely undermine the protections HTTPS is supposed to provide.
> 
> We should do more to block this kind of poor design, not encourage it.

Also: There's nothing to prevent /etc/hosts from directing a localhost address in the HTTPS application to some random place.
Comment 16 homakov 2017-12-18 20:43:35 PST
There are people in the thread with real-world use cases whose designs you just called poor, while offering some strawman arguments about "localhost servers being bad".

>Also: There's nothing to prevent /etc/hosts from directing a localhost address in the HTTPS application to some random place.

>1. There is no guarantee that the server being used is the one the page content was expecting to connect to.

And how is this a problem for a localhost helper that verifies the Origin and asks for explicit confirmation before performing an action, for example? This design does not imply trusting a third-party server.

>2. Content served through the local HTTP server can pull insecure information from anywhere on the internet, serve it to the hosting page, and completely undermine the protections HTTPS is supposed to provide.

Also, this localhost server can execute untrusted GET params.

>this kind of poor design, not encourage it.

We've been happy with the behavior of Chrome on this matter and will surely recommend that users use a browser that follows web standards.

And what about all those helpers that run on localhost? Ever heard of Ethereum? Of the new breed of authentication solutions? It is crucial to be able to talk to local daemons.

There is a whole new range of use cases where you cannot upgrade the browser itself, but you can install a standalone daemon and let the browser talk to it.
Comment 17 Guillaume Rischard 2018-01-11 04:59:18 PST
> Also: There's nothing to prevent /etc/hosts from directing a localhost
> address in the HTTPS application to some random place.

For that reason, other browsers whitelist http://127.0.0.1, and not http://localhost.
Comment 18 Luca Cipriani 2018-05-23 05:53:30 PDT
Hello, Arduino officially speaking here.

We do have a system that HAS to interact with a local server: https://github.com/arduino/arduino-create-agent

This agent is already installed on a couple hundred thousand devices. Due to the blocking of 127.0.0.1 by WebKit, we are forced to create a Certificate Authority for localhost and install it in the certificate chain; this is much worse than just allowing http://127.0.0.1/ (then, obviously, we remove the CA key permanently).

If you read the W3C specs in detail, you can see that 127.0.0.1 is considered a priori authenticated, and indeed this is what both Firefox and Chrome do: they simply respect the W3C specs and do not assume they are better than the committee.

Again here:
https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy
it is correctly stated:
If origin’s host component matches one of the CIDR notations 127.0.0.0/8 or ::1/128 [RFC4632], return "Potentially Trustworthy".

So you are hereby declaring that you do not want to comply with the W3C specs, which is a bit *strange* for a browser engine.

127.0.0.1 is trusted because it is on the same device as the user; the app has to call it (with all the CORS options needed, if necessary) and is responsible for calling the right server.

In addition to that Firefox devs did a great job I think: https://dxr.mozilla.org/mozilla-central/source/dom/security/nsMixedContentBlocker.cpp#744

Now coming to your questions:

1. That only happens if the website explicitly calls a server on localhost, so there will be some form of verification, I hope. In any case, it should not be the web engine's job to block these apps; that should be delegated to the web developer.

2. False; that can only be done if the website calling localhost passes some info, and the main application has full control over the data it sends to the 127.0.0.1:port application.

3. You should allow only 127.0.0.1 instead of localhost; I can agree on this.

4. Consider that by the TLS/SSL specification there is no way to create a valid HTTPS certificate for localhost or for 127.0.0.1 (obviously, and that is good).

5. Please provide any other alternative for this very common scenario: you have a web app that has to talk via HTTP to a local device in order to (as Arduino does) connect the web page to a serial monitor or a USB device. Consider that WebUSB is a draft: https://wicg.github.io/webusb/
and Web Serial does not really exist as of now.

6. So please, can you just try to respect public specifications? Users base their applications on W3C specs.

Thank you,

Luca

(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons:
> 
> 1. There is no guarantee that the server being used is the one the page
> content was expecting to connect to. E.g., a trojan server running as part
> of an application you installed intercepts file transfer information when
> you go to an external cloud storage server site.
> 
> 2. Content served through the local HTTP server can pull insecure
> information from anywhere on the internet, serve it to the hosting page, and
> completely undermine the protections HTTPS is supposed to provide.
> 
> We should do more to block this kind of poor design, not encourage it.
Comment 19 Luca Cipriani 2018-05-23 06:37:57 PDT
Edge fixed the same issue a few hours ago:

https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11963735/
Comment 20 Alexey Proskuryakov 2018-05-23 09:33:38 PDT
Having web pages access (or enumerate) local devices would have to come with a meaningful permission model, which is unlikely to exist. Asking the user anything along the lines of "www.arduino.corp.trusted.myphishingpage.cc would like to access 127.0.0.1:23764 for an unknown reason with unknown consequences, Allow/Block" wouldn't make any security sense.
Comment 21 Michael Catanzaro 2018-05-23 11:21:33 PDT
(In reply to Luca Cipriani from comment #18)
> Hello, Arduino officially speaking here.
> 
> We do have a system that HAS to interact with a local server:
> https://github.com/arduino/arduino-create-agent
> 
> This agent is installed already in a couple hundred thousand devices. Due to
> the blocking of 127.0.0.1 by webkit we are forced to create a Certificate
> Autority for Localhost and install it in the certificate chain, this is much
> worse than just allowing http://127.0.0.1/ (then obviously we remove the CA
> key permanently)

It seems like a pretty good argument in favor of reopening this issue and adopting the Firefox/Chrome behavior. Creating a certificate for 127.0.0.1 is surely worse than the alternative. And there really isn't much value in performing mixed content checks on localhost content.

(In reply to Brent Fulgham from comment #14)
> I do not support this requested change in behavior. Allowing HTTP from
> localhost to be included in a secure page is a terrible idea for a few
> reasons

I'm pretty much satisfied by the responses to this above.

(In reply to Alexey Proskuryakov from comment #20)
> Having web pages access (or enumerate) local devices would have to come with
> a meaningful permission model, which is unlikely to exist. Asking the user
> anything along the lines of "www.arduino.corp.trusted.myphishingpage.cc
> would like to access 127.0.0.1:23764 for an unknown reason with unknown
> consequences, Allow/Block" wouldn't make any security sense.

This makes more sense to me, but the problem is that such access is already allowed from http:// websites, right? Surely mixed content blocking is not the right way to enforce restrictions on accessing local content. Looking at https://bugs.chromium.org/p/chromium/issues/detail?id=607878, it looks like the mixed content spec developers have spent a lot of time thinking about this, including the link to https://mikewest.github.io/cors-rfc1918/ in comment 6.
Comment 22 Alexey Proskuryakov 2018-05-23 13:49:54 PDT
As mentioned in comment 1, I think that we should block localhost access for http too.
Comment 23 Luca Cipriani 2018-05-28 03:34:29 PDT
(In reply to Alexey Proskuryakov from comment #1)
> We should consider blocking cross origin access to localhost completely,
> it's a pretty terrible security risk.

Hi Alexey,

I can partially agree on this, but there should be an alternative. Please also look at how Chrome is addressing it (it has been under discussion since March 2014):

https://bugs.chromium.org/p/chromium/issues/detail?id=378566

Now, the fact that you can easily circumvent everything anyway by just visiting a plain HTTP website that calls http://127.0.0.1 means that mixed-content blocking here is not increasing the overall security of your users. In fact you are decreasing security, because to use this feature users install CA certificates.


This is what we are doing now: https://letsencrypt.org/docs/certificates-for-localhost/ plus signing every request coming from the web to verify that it comes from our specific servers, but this is a problem for the server running on localhost, which needs some sort of security and authentication system. (I remember CUPS using the root user's username/password on many systems since the early days.)

In my opinion you are not solving the security issue by enforcing a mixed content error for 127.0.0.1; an attacker can still circumvent it by using a plain HTTP website. You would solve it if you completely blocked 127.0.0.1 from being contacted from the web, but then please provide an API to let web applications contact the hardware; we are no longer in the '90s.

To quote Mike West, who I believe is the world's leading expert on CORS policy for browsers:

https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e

"Currently, mixed content checks block http://127.0.0.1 from loading in a
page delivered over TLS. I'm (belatedly) coming around to the idea that
that restriction does more harm than good. In particular, I'll note that
folks are installing new trusted roots and self-signing certs for that
IP address, exposing themselves to additional risk for minimal benefit.
Helpful locally installed software is doing the same, with even more
associated risk."

Our alternative is to tell the user to just use a browser engine other than WebKit. So please let us know whether you want to include the change in the roadmap, or at least whether it is going to be WONTFIX, so we can decide whether to phase WebKit out accordingly.

Thank you!
Comment 24 Michael Catanzaro 2018-05-28 08:28:19 PDT
(In reply to Luca Cipriani from comment #23)
> To mention Mike West which I believe is the main expert in the world about
> CORS policy for browsers:

I don't know much about CORS, but at least he's definitely the authority on mixed content. In bug #140625 I'm tracking other cases where WebKit's behavior diverges from his specs. If you see any other bugs related to mixed content, adding a dependency on bug #140625 would be appreciated.

(In reply to Alexey Proskuryakov from comment #22)
> As mentioned in comment 1, I think that we should block localhost access for
> http too.

I won't comment on whether or not WebKit should do that.

If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.

But I rather doubt that will really happen. So long as WebKit continues to allow localhost access for http://, I'm pretty sure it really does not make any sense to block mixed content from 127.0.0.1. So if we treat this solely as a mixed content issue, and assume WebKit will continue to allow loading content from localhost, then we should reopen this bug.
Comment 25 Alexey Proskuryakov 2018-05-28 13:21:09 PDT
I actually think that getting users to trust a certificate is better for multiple reasons.

1. It greatly reduces the impacted group, and makes it a less interesting target.

2. It requires doing something that would be a deterrent to proceeding, which is good. One may decide to limit the hack to a VM, or use a less secure secondary browser just for this purpose, or make the vendor change their approach, or decide to not work with this vendor at all. All of those are better for security.

> I can partially agree on this but there should be an alternative.

I'm not sure why you are insisting that a web browser ever needs to talk to locally installed software and hardware at all. This is low benefit and high risk.

If we had to provide an opt-in, I would argue that it should be implemented in a way that discourages its use. Installing a trusted certificate doesn't sound so bad. Another alternative could be a Developer menu option that allows 127.0.0.1 access just for the currently open window. Or maybe one can take a clue from how NPAPI plug-ins are handled by each browser.

> If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.

Good point, let's make it concrete in bug 186039.
Comment 26 Nathan James 2018-08-10 04:01:08 PDT
> I'm not sure why you are insisting that a web browser ever needs to talk to locally installed software and hardware at all. This is low benefit and high risk.

This is the root of the problem here. What you think devs/users might need it for is irrelevant; there are proven use cases for this, some of them included in this thread, which have been causing users to seek out other browsers.

This seems to be direct neglect of the standard, driven by a single person.
Comment 27 antoine 2018-08-15 13:17:51 PDT
This issue of blocking localhost as well as not allowing mixed content is completely blocking Safari from multiple important use cases. The IoT field and the fintech/payments field are full of use cases for talking to localhost. Example: a point of sale running in the browser needs to talk to a server on localhost to send a payment request to a terminal on the network. With everything on the web being HTTPS nowadays, the browser needs to be able to talk to an HTTP service on the machine. Self-signed certificates are nonsense in this context.

I fail to understand the rationale to block localhost here.
- Developers are giving you valid use cases
- It's in the spec
- Other browsers implement it
- There are no workarounds

If there were credible attacks based on this feature, you'd see Chrome and Firefox users being attacked left and right. This is not the case.

Can we please allow this so that the Web doesn't take a step backwards, and so that we don't have to tell our users "oh you need to use Chrome or Firefox, this doesn't work on Safari". There's a spec - don't be an IE6 developer.
Comment 28 youenn fablet 2018-08-15 16:29:03 PDT
It is unclear to me why we tie mixed content checks to localhost access.
Attacks on localhost servers are currently easy to carry out regardless of mixed content; I do not see what protection we get there.
As for web sites that get data from localhost, they are doing the requests so they should know what the security model is.

Some workarounds:
- Self signed certificates :(
- Deliver the web site through HTTP :(
- Make the connection between browser and localhost server go through a proxy: HTTPS/WSS/WebRTC. The localhost server would need to keep a connection open to this proxy so that it is available, or it would need to be 'woken up' by making the browser navigate to it.

We should think of the best way to protect Web apps/WebKit apps from these attacks (and probably LAN server access in general). Maybe an opt-in or content blockers could help there.
It is reasonable to think that some WebKit applications will want to allow access to 127.0.0.1 and for good reasons. I do not see why mixed content checks should interfere with such apps.

Aligning with the spec makes sense to me at this point.
Comment 29 homakov 2018-08-15 20:18:03 PDT
>I fail to understand the rationale to block localhost here.

Antoine, you must be new here :) Arguments have no power in the land of this ticket.

This is "Mr Proskuryakov against the world" thread. After getting dozens of reasons to make a sane default or at least follow the spec, even after getting a direct endorsement by a security expert like me, that it is indeed totally fine and safe (I fail to see he has any understanding of threat modeling and web security), nothing's changed. 

Now I keep this URL in my "this is why safari sucks" collection and to give websec friends a good laugh.
Comment 30 antoine 2018-08-15 21:52:06 PDT
Indeed "new" here but not new to web browser development - ex-Firefox dev here, back when IE was still dominant. :) I'm very proud of what we achieved in the past 20 years but disheartened when I read such a thread.

I’m truly baffled by the “I myself personally didn’t encounter any use cases so it’s obviously useless to the world” argument. I thought the web community had moved past that.

I'm also baffled by some of the "security concerns" I read here. "If a Trojan is installed on the computer...". If a Trojan is installed you have bigger things to worry about. If a decision is made in the name of security, shouldn't a security body review it? And to that point... didn't one ALREADY REVIEW this exact point? Have there been counter-examples? Attacks in the wild? Zero-day exploits? Or are we just thinking of the children?

Even looking at the future of the web, there are drafts in development to actively let the browser talk to hardware - whether Bluetooth, USB, or even through raw tcp sockets. Thinking that browsers should be banned from hardware communication is curing the disease by killing the patient. And also going against a major trend in the future of the web. Yay for native apps?

Once again: can we follow the spec and not break the web even further? Please?
Comment 31 antoine 2018-08-16 16:10:26 PDT
As an example of how much this is needed, the Chrome team even implemented a Native Messaging API. It is presently available for extensions, but there is talk of bringing it straight to the web.
https://developer.chrome.com/extensions/nativeMessaging

Communication from web to native and back is a very real use case. It should be allowed in Webkit/Safari (with CORS to mitigate any concern), until you decide to supersede it with something better like Chrome's Native Messaging API. At least there will be a path for developers.
Comment 32 Luca Cipriani 2018-08-20 06:19:00 PDT
Firefox is going in the same direction. Better tell our users to just not use this browser.
Comment 33 oeway 2018-10-27 01:49:02 PDT
I registered an account here just for this issue; I hope it can be reconsidered and fixed in the near future.

Right now, we have to instruct the user to use Chrome and FireFox, **not Safari**.
Comment 34 Irakli Gozalishvili 2019-01-18 09:58:46 PST
(In reply to Alexey Proskuryakov from comment #25)
> I actually think that getting users trust a certificate is better for
> multiple reasons.
> 
> 1. It greatly reduces the impacted group, and makes it a less interesting
> target.
> 
> 2. It requires doing something that would be a deterrent to proceeding,
> which is good. One may decide to limit the hack to a VM, or use a less
> secure secondary browser just for this purpose, or make the vendor change
> their approach, or decide to not work with this vendor at all. All of those
> are better for security.
> 
> > I can partially agree on this but there should be an alternative.
> 
> I'm not sure why you are insisting that a web browser ever needs to talk to
> locally installed software and hardware at all. This is low benefit and high
> risk.
> 
> If we had to provide an opt-in, I would argue that it should be implemented
> in a way that discourages its use. Installing a trusted certificate doesn't
> sound so bad. Another alternative could be a Developer menu option that
> allows 127.0.0.1 access just for the currently open window. Or maybe one can
> take a clue from how NPAPI plug-ins are handled by each browser.
> 
> > If you have a concrete plan to start blocking all localhost content in the near future, then obviously this should be WONTFIX.
> 
> Good point, let's make it concrete in bug 186039.


Hi Alexey,

This thread got pretty toxic; threats of recommending other browsers are definitely not helping in driving arguments. I also understand your point that allowing websites to talk to programs on the device does create additional security risks. However, I would like to make the argument that not allowing them to talk to loopback addresses in fact creates larger security risks:

The fact of the matter is that today, due to this restriction, applications are forced to do something that is much worse. They create DNS records like `local.myapp 127.0.0.1` and bundle a TLS certificate + keys with the application.

Note that this does not require installing a trusted certificate root as you mentioned in the comment.


Additionally, you could consider doing something along the lines of `document.requestStorageAccess`, say a `document.requestLoopbackAccess`, and provide a similar user consent prompt:
https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/

Where instead of prompting the user to give site A explicit access when browsing site B, you would rephrase site A as "application A".
Comment 35 Tim Perry 2019-01-23 09:35:26 PST
Just to chime in here too, my application (https://httptoolkit.tech) also requires localhost access from the web. My application consists of a hosted web UI which interacts with an installed desktop service that's used to start & manage other local applications & servers.

My app works in every modern browser except Safari, and unfortunately I'm going to have to simply tell that to users.

I can still see objections here that there's no good use case for web to localhost communication. I'd like to reiterate the evidence from this thread against that, so we can clear that argument out of the way:

* Major common applications like Spotify need this behaviour to interact with desktop applications from the web. They currently work suboptimally because of their workarounds for this (with spotilocal - see comment #8 above).

* Many hardware companies use this behaviour to build web UIs that can interact with attached hardware, including Arduino, with software depending on this running on hundreds of thousands of devices. WebUSB may resolve this for USB devices, but not yet, and only for USB devices specifically.

* There's a substantial ecosystem of Ethereum sites built entirely around localhost communication from the web: https://github.com/ethereum/web3.js

* Many developers like myself in this thread have applications that are broken by this behaviour, in Safari only.

Imo all of these use cases are reasonable, so it's certainly not the case that there are no legitimate use cases at all.

Could anybody summarize the outstanding security concerns around this? What specific attacks would this expose users to? It would be great to try & make progress here if possible, or to find concrete security issues that could be relevant to the other browsers that have implemented this if not.
Comment 36 Maciej Stachowiak 2019-01-23 22:28:31 PST
(In reply to youenn fablet from comment #28)
> It is unclear to me why we relate mixed content checks with localhost access.
> Attacks to localhost servers are currently easy to do no matter mixed
> content or not, I do not see what protection we get there.

Let's think through this. The mixed content policy is meant to protect users from being misled into thinking they are interacting with a secure page with content from a known source, but effectively it's not, because non-https content could have been tampered with in transit. We don't want to give users a false sense of security in this case. It might not be safe to type a credit card number or a password on such a page.

The suggested risks of any access from remote pages to the loopback address are:
(1) Pages could exploit local web services that weren't meant to be accessed from an untrusted source.
(2) Trojan software could install a trap version of a local web service that aims to exploit the page making use of it.

It seems to me these threats are not properly addressed by a failed mixed content check (which would either result in an insecure indicator or a failed resource load if the referring page is http:). The first attack could be performed from an http: page, or in any case the page performing it may not care about an "insecure" warning in the location field. By the time that shows up, the attack has likely already happened, and users would not expect "insecure" to put them on notice of this. A rogue service as in (2) could still exploit pages that deploy any of the many workarounds for this limitation. Furthermore, if malware can run an http server, it can probably do other malicious things locally to interfere with the integrity of websites.

So while it may make sense to consider limitations for remote access to local web servers, holding out on this tweak to the mixed content rules does not fulfill the purpose of mixed content rules, nor does it properly mitigate the attacks.

Therefore I'm reopening this, because I think it was closed based on an incorrect rationale.


> As for web sites that get data from localhost, they are doing the requests
> so they should know what the security model is.
> 
> Some workarounds:
> - Self signed certificates :(
> - Deliver the web site through HTTP :(
> - Make the connection between browser and localhost server go through a
> proxy: HTTPS/WSS/WebRTC. The localhost server would need to keep a
> connection with this proxy so that it is available or it should be 'waken
> up' by making the browser navigating to it.
> 
> We should think of the best way to protect Web apps/WebKit apps from these
> attacks (and probably LAN server access in general). Maybe an opt-in or
> content blockers could help there.
> It is reasonable to think that some WebKit applications will want to allow
> access to 127.0.0.1 and for good reasons. I do not see why mixed content
> checks should interfere with such apps.
> 
> Aligning with the spec makes sense to me at this point.
Comment 37 Luca Cipriani 2019-01-24 00:47:41 PST
Thank you so much for reopening this issue. Let us know how we can help with the process and if you need more info on some use-cases. We have seen other projects having the same issue, here some of them:
https://github.com/arduino/arduino-create-agent/network/members


Thank you!
Comment 38 Tim Perry 2019-01-24 01:07:19 PST
Totally agree with the above, thanks for reopening this!

A couple of additional points on the two risks you pointed out, just to reinforce that they're not a concern:

> Pages could exploit local web services that weren't meant to be accessed from an untrusted source.

This same risk applies equally to any non-localhost web application. The real defence against this attack is for local web services to use CORS appropriately to manage cross-domain requests, like any other domain. That blocks these requests entirely and solves this issue (assuming localhost doesn't have any special CORS behaviour, which is true afaik).

> Trojan software could install a trap version of a local web service that aims to exploit the page making use of it.

You mentioned that malicious software running on your computer likely already poses a larger threat here, which is certainly true.

In addition though, malicious software running on your computer could easily include a valid certificate for a real domain that resolves to localhost (localhost.evil.com), and then host a secure HTTPS service on localhost, to avoid all warnings.

Even if your trojan does need to interact with a web session for some reason, it's very easy to defeat localhost mixed content protection like this.
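
To make the first point above concrete, a loopback helper can refuse anything whose Origin request header is not on a known list, and only then emit CORS headers. Here is a minimal sketch of that decision logic (hypothetical names and origins, not tied to any particular service; a real helper would also handle OPTIONS preflights):

// Sketch of gating a loopback helper on the Origin request header
// (hypothetical names/origins; not any particular product's code).
#include <iostream>
#include <optional>
#include <set>
#include <string>

static const std::set<std::string> kAllowedOrigins = {
    "https://app.example" // hypothetical first-party web origin
};

// Returns the CORS headers to attach to the response, or nullopt to reject (e.g. with a 403).
std::optional<std::string> corsHeadersForOrigin(const std::string& origin)
{
    if (!kAllowedOrigins.count(origin))
        return std::nullopt;
    return "Access-Control-Allow-Origin: " + origin + "\r\nVary: Origin\r\n";
}

int main()
{
    for (const std::string origin : { "https://app.example", "https://evil.example" }) {
        auto headers = corsHeadersForOrigin(origin);
        std::cout << origin << " -> " << (headers ? *headers : std::string("reject with 403\n"));
    }
}

The point is simply that the defence against drive-by access lives in the helper's Origin/CORS handling, independently of whether the requesting page happens to be HTTP or HTTPS.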
Comment 39 homakov 2019-01-24 07:18:25 PST
Happy to see this reopened. Safari has really been hitting many nerves with this unreasonable prohibition. It's been widely concluded there is no (new) security threat that otherwise wouldn't exist anyway.

What is stopping this from being implemented soon? Who needs to approve this? It must be a one-line change.
Comment 40 Michael Catanzaro 2019-01-24 10:00:07 PST
I think this is probably a small change in MixedContentChecker::isMixedContent in Source/WebCore/loader/MixedContentChecker.cpp.

The challenge is going to be layout tests. First, the change requires a layout test of its own. But also, all our mixed content layout tests use an Apache server running on 127.0.0.1, so all those tests would break if we fix this. I think, since we'd probably be allowing 127.0.0.1 and ::1 but not localhost, as per the spec, perhaps we could switch the URIs for all the existing mixed content tests to use localhost to verify that mixed content blocking still applies to localhost, and a new test for this bug could use 127.0.0.1 and ::1 to verify that the mixed content checks don't apply to the loopback addresses.

P.S. If anyone is interested in contributing -- remember WebKit is an open source project after all -- see https://webkit.org/contributing-code/ for tips. Changes can be approved by any reviewer, though since this is a controversial issue we'd seek consensus first.
Comment 41 Michael Catanzaro 2019-01-24 10:04:09 PST
BTW the tests are in LayoutTests/http/tests/security/mixedContent. For example, in LayoutTests/http/tests/security/mixedContent/resources/frame-with-insecure-image.html, we could try changing this:

<img src="http://127.0.0.1:8080/security/resources/compass.jpg">

(which would be broken by this change), to this:

<img src="http://localhost:8080/security/resources/compass.jpg">

(which should still be blocked).
Comment 42 Michael Catanzaro 2019-01-24 10:07:48 PST
Hm, I've spent about two minutes looking at the spec, but it does say:

If origin’s host component is "localhost" or falls within ".localhost", and the user agent conforms to the name resolution rules in [let-localhost-be-localhost], return "Potentially Trustworthy".

So... plan probably foiled.
Comment 43 Michael Catanzaro 2019-01-25 08:19:57 PST
I guess we'll need a new setting just for use by tests, and a TestController message to enable/disable it for testing purposes.
Comment 44 Rob McVey 2019-04-05 16:06:27 PDT
Thanks for reopening this issue. Just saw the release notes for Safari 12.1 and it reminded me to check on the status of this. Any updates that can be provided on this issue? I see that it's still unassigned. I for one would really appreciate it if this could be prioritized.

Thanks again!
Comment 45 antoine 2019-10-16 16:08:11 PDT
I'll echo the previous comment. Any progress on this will be greatly appreciated.
Comment 46 antoine 2019-10-18 10:25:56 PDT
Michael Catanzaro: I see that SecurityOrigin.cpp has this 

// FIXME: Ensure that localhost resolves to the loopback address. 

in

bool SecurityOrigin::isLocalHostOrLoopbackIPAddress(StringView host)

I would suggest that the fix to this bug not tackle "localhost" resolution but focus on the loopback address, and a separate bug be filed for localhost.

In that context, the fix would only be changing the function MixedContentChecker::isMixedContent line 62:

return !SecurityOrigin::isSecure(url);

to

return !(SecurityOrigin::isSecure(url) || SecurityOrigin::isLoopbackIPAddress(url));

Modifications to tests would involve replacing 127.0.0.1 to localhost at the appropriate places (which would then be modified as necessary as part of a separate bug to tackle localhost rules).

Would a fix with those changes be acceptable?
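
For illustration, here is a minimal standalone sketch of the check being discussed (hypothetical helper names, buildable outside WebKit; this is not the actual SecurityOrigin implementation). It treats a host as loopback if it is an IPv4 literal in 127.0.0.0/8 or the IPv6 loopback ::1, matching the spec text quoted in comment 18, and shows how such a check could combine with a scheme check along the lines of the proposed change:

// Standalone sketch of a loopback-host check (hypothetical names, not WebKit code).
// Build with: c++ -std=c++17 loopback_check.cc -o loopback_check
#include <arpa/inet.h>
#include <netinet/in.h>
#include <cstring>
#include <iostream>
#include <string>

// True if `host` is an IPv4 literal in 127.0.0.0/8 or the IPv6 loopback ::1.
bool isLoopbackIPAddress(const std::string& host)
{
    in_addr v4 {};
    if (inet_pton(AF_INET, host.c_str(), &v4) == 1)
        return (ntohl(v4.s_addr) >> 24) == 127; // first octet is 127

    std::string bare = host;
    if (bare.size() >= 2 && bare.front() == '[' && bare.back() == ']')
        bare = bare.substr(1, bare.size() - 2); // URLs carry IPv6 literals in brackets

    in6_addr v6 {};
    if (inet_pton(AF_INET6, bare.c_str(), &v6) == 1)
        return std::memcmp(&v6, &in6addr_loopback, sizeof(v6)) == 0;

    return false;
}

// Rough analogue of the proposed "isSecure(url) || isLoopbackIPAddress(url)" predicate.
bool wouldBeTrustworthy(const std::string& scheme, const std::string& host)
{
    return scheme == "https" || scheme == "wss" || isLoopbackIPAddress(host);
}

int main()
{
    for (const char* host : { "127.0.0.1", "127.8.9.10", "[::1]", "192.168.1.2", "localhost" })
        std::cout << host << " -> " << (isLoopbackIPAddress(host) ? "loopback" : "not loopback") << "\n";
    std::cout << "http://127.0.0.1 trustworthy? " << wouldBeTrustworthy("http", "127.0.0.1") << "\n";
}

Note that plain `localhost` deliberately returns false in this sketch; the name-based cases are the separate question discussed below.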
Comment 47 Michael Catanzaro 2019-10-19 07:27:03 PDT
(In reply to antoine from comment #46)
> Michael Catanzaro: I see that SecurityOrigin.cpp has this 
> 
> // FIXME: Ensure that localhost resolves to the loopback address. 
> 
> in
> 
> bool SecurityOrigin::isLocalHostOrLoopbackIPAddress(StringView host)
> 
> I would suggest that the fix to this bug not tackle "localhost" resolution
> but focus on the loopback address, and a separate bug be filed for localhost.

In fact, the FIXME is not fixable at the WebKit level. DNS resolution is performed by platform libraries. In the case of WebKitGTK and WPE, that's done by GIO, which we just fixed in https://gitlab.gnome.org/GNOME/glib/merge_requests/616. For Mac, probably either CoreFoundation or perhaps the system resolver, not sure. It would be appropriate to replace the FIXME with a comment indicating that WebKit assumes localhost is always really localhost.

(In reply to antoine from comment #46)
> In that context, the fix would only be changing the function
> MixedContentChecker::isMixedContent line 62:
> 
> return !SecurityOrigin::isSecure(url);
> 
> to
> 
> return !(SecurityOrigin::isSecure(url) ||
> SecurityOrigin::isLoopbackIPAddress(url));

Nice investigation!
 
> Modifications to tests would involve replacing 127.0.0.1 to localhost at the
> appropriate places (which would then be modified as necessary as part of a
> separate bug to tackle localhost rules).

I'm not sure if it will be that easy. E.g. this change will likely break all the mixed content tests. I think we will just need to have a setting that tests can use to choose which behavior they get. See my suggestion in comment #43

> Would a fix with those changes be acceptable?

I *believe* we have consensus on this change at this point, so as long as there's a new test and it doesn't break old tests, I think so. Seems clear that the test work will be harder than the change itself.
Comment 48 Michael Catanzaro 2019-10-19 07:28:38 PDT
(In reply to Michael Catanzaro from comment #47)
>It would be
> appropriate to replace the FIXME with a comment indicating that WebKit
> assumes localhost is always really localhost.

Of course, it would be a good idea for someone familiar with macOS or iOS to check what really happens on Apple platforms before doing so.
Comment 49 antoine 2019-10-19 23:04:17 PDT
> > Modifications to tests would involve replacing 127.0.0.1 to localhost at the
> > appropriate places (which would then be modified as necessary as part of a
> > separate bug to tackle localhost rules).
> 
> I'm not sure if it will be that easy. E.g. this change will likely break all
> the mixed content tests. I think we will just need to have a setting that
> tests can use to choose which behavior they get. See my suggestion in
> comment #43

Makes sense - I actually got things to work by swapping 127.0.0.1 for localhost in the mixed content tests (along with the string in the expected result), but I guess the TestController is a cleaner approach. I'll give it a shot in a separate branch. Thanks Michael!
Comment 50 Michael Catanzaro 2019-10-20 10:03:10 PDT
(In reply to antoine from comment #49)
> Makes sense - i actually got things to work by swapping 127.0.0.1 for
> localhost in the mixed content tests (along with the string in the expected
> result) but i guess the testcontroller is a cleaner approach. I'll give it a
> shot in a separate branch. Thanks Michael!

Oh, so you chose to whitelist only 127.0.0.1 and ::1, and not also localhost. In that case, modifying TestController is of course not required.

If you want to whitelist localhost as well -- which I expect is desired -- then you will need to add a TestController setting to make the tests pass.

But it's also fine to start out by whitelisting 127.0.0.1 and ::1, and leave localhost for a follow-up patch.

(In reply to antoine from comment #46)
> In that context, the fix would only be changing the function
> MixedContentChecker::isMixedContent line 62:
> 
> return !SecurityOrigin::isSecure(url);
> 
> to
> 
> return !(SecurityOrigin::isSecure(url) ||
> SecurityOrigin::isLoopbackIPAddress(url));

Actually, it would be better to change SecurityOrigin::isSecure directly instead, since loopback can be trusted for all purposes, not just mixed content checking.
Comment 51 antoine 2019-10-20 12:31:12 PDT
(In reply to Michael Catanzaro from comment #50)
> Oh, so you chose to whitelist only 127.0.0.1 and ::1, and not also
> localhost. In that case, modifying TestController is of course not required.
> 
> If you want to whitelist localhost as well -- which I expect is desired --
> then you will need to add a TestController setting to make the tests pass.
> 
> But it's also fine to start out by whitelisting 127.0.0.1 and ::1, and leave
> localhost for a follow-up patch.

Sounds good - that's the approach I'm more comfortable with, as I'm not certain of the implications of whitelisting localhost (see https://www.w3.org/TR/secure-contexts/#localhost "Given that uncertainty, this document errs on the conservative side by special-casing 127.0.0.1, but not localhost.").


> Actually, it would be better to change SecurityOrigin::isSecure directly
> instead, since loopback can be trusted for all purposes, not just mixed
> content checking.

Makes sense - will make the modification.

This should allow all present tests to pass. In terms of new tests - should we duplicate all of the mixed-content tests to check for 127.0.0.1 / ::1 or have only one test for that specific use case?
Comment 52 antoine 2019-10-21 15:29:12 PDT
Michael - i have a patch ready to go. Insights on any new tests to add would be appreciated as this is my first contribution to Webkit. Thanks!
Comment 53 Mike West 2019-10-21 21:50:03 PDT
(In reply to antoine from comment #51)
> (In reply to Michael Catanzaro from comment #50)
> > Oh, so you chose to whitelist only 127.0.0.1 and ::1, and not also
> > localhost. In that case, modifying TestController is of course not required.
> > 
> > If you want to whitelist localhost as well -- which I expect is desired --
> > then you will need to add a TestController setting to make the tests pass.
> > 
> > But it's also fine to start out by whitelisting 127.0.0.1 and ::1, and leave
> > localhost for a follow-up patch.
> 
> Sounds good - that's the approach i'm more comfortable with as i'm not
> certain of the implications of whitelisting localhost (see
> https://www.w3.org/TR/secure-contexts/#localhost "Given that uncertainty,
> this document errs on the conservative side by special-casing 127.0.0.1, but
> not localhost.").

Note that this changed several years ago: https://w3c.github.io/webappsec-secure-contexts/#localhost is the current text, which relies upon https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02 to lock down `*.localhost` (which I wasn't able to successfully get through DNSOP, but which Chrome and Firefox implement).

AFAIK, Chrome and Firefox both treat explicit loopback in the forms `127.0.0.1`, `[::1]`, and `*.localhost` as secure contexts.

For tests that needed to distinguish the difference (and so that we can test multiple origins more clearly), we map arbitrary addresses (e.g. `http://layout.test/`) to loopback: see https://cs.chromium.org/chromium/src/content/shell/app/shell_main_delegate.cc?rcl=dfa06417b4ff75a88202b0359fa914212f52e7b0&l=262. That might be an approach that could work for WebKit as well?
Comment 54 Michael Catanzaro 2019-10-22 05:27:03 PDT
(In reply to antoine from comment #51)
> This should allow all present tests to pass. In terms of new tests - should
> we duplicate all of the mixed-content tests to check for 127.0.0.1 / ::1 or
> have only one test for that specific use case?

Well some developers might prefer duplicating all the tests in order to be thorough, but honestly I think that would create more maintenance effort than actual value. So after you've switched from 127.0.0.1 to localhost in the tests, I would duplicate only one really basic test, say insecure-image-in-main-frame.html, call it insecure-image-in-loopback-main-frame.html, and verify that the content is not blocked when using 127.0.0.1 instead of localhost. IMO the one test should suffice.

Then we should create a follow-up bug to consider *.localhost as a secure context as well (which requires verifying that it is indeed secure when using the Cocoa and curl network backends, as it now is for the soup backend), since that's what Mike is clearly suggesting that we do, and that's what Firefox and Chrome already do. Of course, bonus points if you want to go all the way and do it this way initially, but not required IMO.
Comment 55 Mike West 2019-10-22 05:41:50 PDT
> that's what Mike is clearly suggesting that we do, and that's what Firefox and Chrome already do.

For clarity, Mike is suggesting that y'all first implement the restrictions in https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02 such that `localhost` and `*.localhost` always resolve to loopback, and never hit the internet (see https://cs.chromium.org/chromium/src/net/dns/host_resolver_manager.cc?rcl=905e57ccac6951efcfbc514fe33839c6ede4fee2&l=2751 for example). I expect this would require CFNetwork changes for macOS, and might not be trivially implementable right away.

I don't think it's safe to treat `localhost` or `*.localhost` as secure contexts without that set of restrictions in place, as it's very unlikely that developers (or users!) understand that those names might resolve out to the internet in some cases.
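
To illustrate the kind of restriction Mike is describing, here is a standalone sketch (hypothetical names; not how any browser's network stack is actually structured) of a resolver wrapper that never lets `localhost` or `*.localhost` names reach DNS:

// Standalone sketch of a "let localhost be localhost" resolver guard
// (hypothetical names; real browsers enforce this inside their network stacks).
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

static bool isLocalhostName(std::string host)
{
    for (char& c : host)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    if (!host.empty() && host.back() == '.')
        host.pop_back(); // ignore a trailing dot
    static const std::string suffix = ".localhost";
    return host == "localhost"
        || (host.size() > suffix.size() && host.compare(host.size() - suffix.size(), suffix.size(), suffix) == 0);
}

// Resolve a hostname, but short-circuit localhost names to loopback so they never hit DNS.
std::vector<std::string> resolveHost(const std::string& host)
{
    if (isLocalhostName(host))
        return { "127.0.0.1", "::1" };

    std::vector<std::string> addresses;
    addrinfo hints {};
    hints.ai_socktype = SOCK_STREAM;
    addrinfo* results = nullptr;
    if (getaddrinfo(host.c_str(), nullptr, &hints, &results) != 0)
        return addresses;
    for (addrinfo* entry = results; entry; entry = entry->ai_next) {
        char buffer[INET6_ADDRSTRLEN];
        const void* addr = entry->ai_family == AF_INET
            ? static_cast<const void*>(&reinterpret_cast<sockaddr_in*>(entry->ai_addr)->sin_addr)
            : static_cast<const void*>(&reinterpret_cast<sockaddr_in6*>(entry->ai_addr)->sin6_addr);
        if (inet_ntop(entry->ai_family, addr, buffer, sizeof(buffer)))
            addresses.emplace_back(buffer);
    }
    freeaddrinfo(results);
    return addresses;
}

int main()
{
    for (const std::string host : { "localhost", "dev.localhost", "webkit.org" })
        for (const auto& address : resolveHost(host))
            std::cout << host << " -> " << address << "\n";
}

Only once a port's resolver gives that guarantee would it make sense to extend the trustworthy-origin treatment from the IP literals to the localhost names.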
Comment 56 antoine 2019-10-22 07:19:19 PDT
(In reply to Michael Catanzaro from comment #54)
> Well some developers might prefer duplicating all the tests in order to be
> thorough, but honestly I think that would create more maintenance effort
> than actual value. So after you've switched from 127.0.0.1 to localhost in
> the tests, I would duplicate only one really basic test, say
> insecure-image-in-main-frame.html, call it
> insecure-image-in-loopback-main-frame.html, and verify that the content is
> not blocked when using 127.0.0.1 instead of localhost. IMO the one test
> should suffice.

Sounds good, thanks for the feedback.

> Then we should create a follow-up bug to consider *.localhost as a secure
> context as well (which requires verifying that it is indeed secure when
> using the Cocoa and curl network backends, as it now is for the soup
> backend), since that's what Mike is clearly suggesting that we do, and
> that's what Firefox and Chrome already do. Of course, bonus points if you
> want to go all the way and do it this way initially, but not required IMO.

Agreed; fixing loopback as a first step is risk-free and will address the pains everyone has expressed in this thread. Mike brought up some great points, and I'll let a more experienced developer tackle localhost.

I just ran the entire regression suite, though, and it seems like we can't avoid a TestController setting for some tests that rely on 127.0.0.1 being insecure and need a cross-domain origin from localhost. This TestController setting will in any case be useful the day localhost becomes trusted as well.
Comment 57 Christiaan Goossens 2020-05-26 12:53:07 PDT
Hi, we are using this to connect with a user-installed application that runs a local WebSocket server to stream some data that the user entered in the web browser to their app on request. This currently works in all browsers (as it should per spec) by connecting to ws://127.0.0.1:[port].

Currently this is broken in the latest version of Safari. What's the status on this bug report? Will Webkit (and Safari) start following the Mixed-Content spec on this issue? Let me know. Thanks in advance.
Comment 58 Christiaan Goossens 2020-05-26 12:54:31 PDT
(In reply to c.goossens from comment #57)
> Hi, we are using this to connect with a user installed application that runs
> a local websocket to stream some data that they entered in the webbrowser to
> their app on request. This currently works in all browsers (as it should per
> spec) by connecting to ws://127.0.0.1:[port].
> 
> Currently this is broken in the latest version of Safari. What's the status
> on this bug report? Will Webkit (and Safari) start following the
> Mixed-Content spec on this issue? Let me know. Thanks in advance.

At least, to be clear, can we allow requests to 127.0.0.1 and ::1?
Comment 59 Michael Catanzaro 2020-05-26 13:37:37 PDT
(In reply to Mike West from comment #55)
> > that's what Mike is clearly suggesting that we do, and that's what Firefox and Chrome already do.
> 
> For clarity, Mike is suggesting that y'all first implement the restrictions
> in
> https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
> such that `localhost` and `*.localhost` always resolve to loopback, and
> never hit the internet (see
> https://cs.chromium.org/chromium/src/net/dns/host_resolver_manager.
> cc?rcl=905e57ccac6951efcfbc514fe33839c6ede4fee2&l=2751 for example). I
> expect this would require CFNetwork changes for macOS, and might not be
> trivially implementable right away.
> 
> I don't think it's safe to treat `localhost` or `*.localhost` as secure
> contexts without that set of restrictions in place, as it's very unlikely
> that developers (or users!) understand that those names might resolve out to
> the internet in some cases.

Of course, with four different network backends, it's hard to guarantee they have all done this, and newer versions of WebKit are certain to be used against older network backends... but we implemented this guarantee in GResolver a year ago, at your suggestion, so libsoup-based ports should be good: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/616

(In reply to c.goossens from comment #57)
> Hi, we are using this to connect with a user installed application that runs
> a local websocket to stream some data that they entered in the webbrowser to
> their app on request. This currently works in all browsers (as it should per
> spec) by connecting to ws://127.0.0.1:[port].
> 
> Currently this is broken in the latest version of Safari. What's the status
> on this bug report? Will Webkit (and Safari) start following the
> Mixed-Content spec on this issue? Let me know. Thanks in advance.

Nobody is working on this bug report; the last we heard from Antoine was comment #56. But from comment #46, we see the actual code change here is a one-liner, so this is really just blocked on tests. Maybe review the comments above to see the previous discussion on what's needed for the tests. I think we have consensus to accept this change, but the patch would need to include a test and ensure existing tests don't break.
Comment 60 Christiaan Goossens 2020-05-26 13:53:11 PDT
Thank you, Michael, for your quick response! I hope I have bumped this bug report and someone can try the quick one-liner fix (and edit/run the tests).

If that's not possible, I will see if we can make time in a sprint to submit a contribution to the project. I don't have any experience with the WebKit codebase, however, so I would rather avoid that.
Comment 61 antoine 2020-05-26 14:07:16 PDT
(In reply to Christiaan Goossens from comment #60)
> Thank you Michael for your quick response! I hope I have bumped this
> bugreport and someone can try the quick one-liner fix (and edit/run the
> tests).
> 
> If that's not possible, I will see if we can make time in a sprint to submit
> a contribution to the project. I don't have any experience with the Webkit
> codebase however, so I would rather avoid that.

I made the fix and tested that it works a while back. Unfortunately I went down the wrong path for the tests, and as Michael mentioned, adapting all the tests to make sure they still function correctly is actually a significant job. I had started but never found time to finish. It's still on my todo list, but I'd be happy if someone beats me to it.
Comment 62 Alexey Proskuryakov 2020-05-26 17:21:42 PDT
With the recent reports of websites fingerprinting machines using loopback connections, it seems even more obvious that this is a technique that needs to be at least restricted.
Comment 63 homakov 2020-05-26 17:58:52 PDT
There is no danger in allowing access to 127.0.0.1, as has been shown many times in this thread. To sum it up:

1. All other browsers behave correctly; Safari doesn't.

2. No serious attack has been found that works in other browsers but not in Safari.

3. Fingerprinting loopback services such as Redis or MongoDB is rather pointless against 99% of users. Developers are power users and can handle it.

4. Fingerprinting has nothing to do with the subject (HTTPS-to-localhost connections). You can still perfectly well access localhost from http:// websites; allowing it from https:// changes nothing.

5. The only reason mixed content warnings exist is to prevent spoofing or leaking data to a middleman, which is not the case when http:// points to 127.0.0.1 and not a remote server.

There is no middleman between my browser and localhost. Or is there?
Comment 64 Alexey Proskuryakov 2020-05-26 18:19:20 PDT
The fact that localhost connections need to be restricted is not what this bug is about. But it’s relevant context.
Comment 65 homakov 2020-05-26 18:51:09 PDT
Gotta love “the fact” part without any argumentation. I and a few other people gave arguments above.

No, a “fingerprinting threat” and other script-kiddie attacks do not count as valid reasons to block a whole range of use cases where browsers talk to 127.0.0.1 _in some way, without forging self-signed SSL certs_.

One more observation: you are the *only* person against this. Can you name any other person here who sees blocking localhost as the only solution, given to us as “a fact”? Not a single one, I’m afraid.
Comment 66 antoine 2020-05-26 19:09:40 PDT
Gents, we’ve beaten this dead horse. Let’s just get it done.
Comment 67 Michael Catanzaro 2020-05-27 06:02:55 PDT
(In reply to Alexey Proskuryakov from comment #62)
> With the recent reports of websites fingerprinting machines using loopback
> connections, it seems even more obvious that this is a technique that needs
> to be at least restricted.

I agree that's something we probably need to start thinking about, because we have agreed anti-fingerprinting is a priority for WebKit that may take precedence over web compat. But it really has nothing to do with mixed content. Mixed content checks are not a good anti-fingerprinting measure because they can be trivially circumvented by using an http:// URI as the main resource rather than an https:// URI. A rule like "only localhost URIs may access localhost" might make sense to propose in another bug, but even if we do that, the mixed content behavior should still be changed; i.e. there's no need to display a security warning when https://127.0.0.1 loads content from http://127.0.0.1.
Comment 68 Alexey Proskuryakov 2020-05-27 10:11:09 PDT
Of course. We just need to implement the restrictions first, then look into this.
Comment 69 Dustin Nielson 2020-06-10 20:19:13 PDT
These are just a few comments and observations about this bug.

First, there have been several discussions about localhost vs. loopback that are outside the scope of this bug, so let’s just not address that issue here; it’s going to involve OS-level changes before localhost name resolution can be trusted to behave exactly the same as loopback.

The actual bug:

Content from loopback addresses (e.g. 127.0.0.1) should not be considered mixed content

I think a more appropriate description would be something like:

“Safari’s mixed content handling does not comply with web specifications.”

This more closely describes the actual issue and, in my opinion, makes it a critical bug, since any change that doesn’t follow the specifications is potentially a breaking change, as in this case.

There is talk of needing to put restrictions in place before this fix can be implemented. I’d argue that such restrictions should not be a Safari-specific implementation, as that does not solve the described problems for any other browser. My recommendation is that those kinds of changes be addressed at the application level, with entitlements etc., so the fix is universal in nature and solves any potential issues for all browsers.

Lastly, let’s examine the goal that this departure from the spec was trying to achieve, which was presumably to increase security.

The actual outcome of this implementation is that websites have to recommend using essentially any browser other than Safari, most likely by posting something like this on the page:

“We see you’re trying to access this page from Safari. Unfortunately, Safari currently departs from web specifications, which has limited our ability to support it. Please download (list of any other browser) to use this site as intended.”

This leads users to another browser, bypassing the goal this behavior was intended to achieve as well as all the other great features that Safari does support.

This has led to Safari being called the Internet Explorer of the modern age. In fact, Internet Explorer extended web specifications rather than breaking them: it added proprietary functionality that, when used by websites, prevented other browsers from accessing those sites, but still allowed IE to render (for the most part) any spec-compliant website.

I would love to see this bug resolved ASAP, as I’m close to releasing an application that requires this functionality, and I’d much rather push people to upgrade an outdated Safari to one that is spec-compliant again than tell my users they need to just use another browser.
Comment 70 Frédéric Wang (:fredw) 2020-07-20 08:57:06 PDT
Hi. There are a lot of comments in this discussion; I'd just like to try to summarize things and make sure I understand the situation.

== Bug content ==

This bug is about not treating content from loopback addresses as mixed content:

(1) Required by the specification: 127.0.0.1 and ::1
(2) Optional: localhost and *.localhost

In addition, for (2):

(3) The spec adds the restriction that browsers must ensure these names don't resolve to a non-loopback address.

== Positions of people ==

* IIUC Mozilla and Chromium developers implemented (1)+(2)+(3)
* Several users expressed their support for (1)+(2).
* Maintainers of the WebKit Linux ports (at least Michael, but I'm personally in favor too) also expressed support for aligning with the spec.
* Some maintainers of the WebKit macOS/iOS ports find the proposed change sensible/OK (Youenn and Maciej); others expressed concerns (Brent and Alexey). Can you please elaborate on whether these concerns apply to both (1)+(2) or just (2)? Also, would they be addressed by implementing (3), or do you think the current specs are still too lax and WebKit should keep departing from them?

== Development ==

* Michael and Antonio have investigated this a bit (thanks!). Are you still actively working on this? Do you have patches to share?
* Tests are likely to break. We can work around this for (1) by relying on localhost instead of 127.0.0.1, but in the end we will still need a better solution when we implement (2), such as the one Michael sketched.
* Implementing restriction (3) might require changes in low-level libraries. This is already done in GLib, but for proprietary ports like macOS/iOS it will be up to Apple to handle. However, Gecko and Chromium developers implemented this, so it seems it could still be done at the web engine level?
Comment 71 Frédéric Wang (:fredw) 2020-07-20 09:19:31 PDT
(In reply to Frédéric Wang (:fredw) from comment #70)
> * IIUC Mozilla and Chromium developers implemented (1)+(2)+(3)

Actually, it seems Mozilla does not do (2)+(3); there is a WIP patch here: https://bugzilla.mozilla.org/show_bug.cgi?id=1220810
Comment 72 Michael Catanzaro 2020-07-20 09:27:46 PDT
(In reply to Frédéric Wang (:fredw) from comment #70)
> Hi. There are a lot of comments in this discussion, I'd just like to try to
> summarize things and see be sure I understand the situation.
> 
> == Bug content ==
> 
> This bug is about not treating loopback adresses as mixed content:
> 
> (1) Required by the specification: 127.0.0.1 and ::1
> (2) Optional: localhost and *.localhost
> 
> In addition for (2)
> 
> (3) the spec adds the restriction that browsers must ensure they don't
> resolve to a non-loopback address.

All correct, yes.

> * Some maintainers of WebKit macOS/iOS ports find the proposed change
> sensible/ok (Youenn and Maciej) others expressed concerns (Brent and
> Alexey). Can you please elaborate whether these concerns apply to both
> (1)+(2) or just (2)? Also, would they be addressed by implementing (3) or do
> you think the current specs are still too lax and WebKit should keep
> departing from them?

Alexey wants to block non-localhost origins from loading any content from localhost, and he wants that to happen *before* we fix this issue. That's a controversial change that app developers are not going to like. Those concerns apply to both (1)+(2), and implementing (3) would not address them at all, because the goal there is to prevent websites from accessing localhost at all.

(In reply to Frédéric Wang (:fredw) from comment #70)
> * Michael and Antonio have investigated this a bit (thanks!). Are you still
> actively working on this? Do you have patches to share?

No, but the required code change for (1) is a one-liner (see comment #46). It's just tests that need work.

> * Tests are likely to break. We can workaround this for (1) by relying on
> localhost instead of 127.0.0.1 but at the end we will still need a better
> solution when we implement (2) such as the one Michael sketched.

I think it will need a new TestController setting, yes. Shouldn't be hard.

> * Implementing the restriction (3) might require change in low-level
> libraries. This is already done in GLib but for proprietary ports like
> macOS/iOS this will be up to Apple to handle it. However gecko and chromium
> developers implemented this, so it seems they could still be done at the web
> engine level?

Well, Firefox and Chromium control their own network stacks, including their own custom DNS resolution. WebKit uses the system network stack, so improvements necessarily have to happen there first. It's not architecturally possible for WebKit to guarantee (3); instead, we just have to assume that the system network stack implements the behavior we want. Currently that is true for the libsoup backend, but not true for Apple. I'm not sure whether it's true for the curl backend, but since we haven't checked, we should assume not.

Now, we could modify DNSResolveQueue to resolve localhost ourselves instead of passing that to the system network stack, but that change would only apply to prefetching. WebKit's actual resource loads are still going to use the system network stack, and changing that would not be desirable. So yes, (2) is blocked on Apple, because only Apple can fix (3). We can do (1), though.
Comment 73 Maciej Stachowiak 2020-07-29 00:04:49 PDT
We have the ability to add hooks to know whether `localhost` resolves to a loopback address. We are gathering opinions from folks at Apple on the topic that is the main subject of this bug.
Comment 74 Tim Perry 2020-10-09 06:08:39 PDT
Any updates on this? Was there a conclusion from the opinion gathering at Apple?

I still really need this supported for my application, and right now I still have to tell all Safari users to use a different browser.
Comment 75 Tim Perry 2020-10-22 02:00:38 PDT
Firefox just announced (https://groups.google.com/g/mozilla.dev.platform/c/sZdEYTiEBdE/m/PbGpLjcqAQAJ) their intent to ship a fix for this for 'localhost' (127.0.0.1 is already treated as trusted, according to https://bugzilla.mozilla.org/show_bug.cgi?id=903966).

The Firefox bug for the new localhost fix is here: https://bugzilla.mozilla.org/show_bug.cgi?id=1488740. There's more feedback & use cases listed there that may be of interest.
Comment 76 Frédéric Wang (:fredw) 2020-10-22 02:14:14 PDT
I don't know the status of the internal discussion at Apple, but speaking for Igalia, we plan to go back and check the status/plan again soon... after we address the Firefox case.
Comment 77 Frédéric Wang (:fredw) 2020-10-28 07:57:27 PDT
I had tried to summarize my understanding in comment 70, but just to be more specific about the spec:

Mixed content is defined here:
https://w3c.github.io/webappsec-mixed-content/#mixed-content

which excludes a priori authenticated URLs:
https://w3c.github.io/webappsec-mixed-content/#a-priori-authenticated-url

which are essentially "Potentially Trustworthy" URLs:
https://w3c.github.io/webappsec-secure-contexts/#is-url-trustworthy

which in turn rely on the origin definition here:
https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

4. includes loopback IP addresses e.g. 127.0.0.1 or ::1 (mentioned here in bug 171934)
5. includes domain names "localhost" and "*.localhost" (mentioned here but also in bug 160504)

Point 5 is a MAY, conditioned on "ensuring that localhost never resolves to a non-loopback address"; see https://w3c.github.io/webappsec-secure-contexts/#localhost

Finally, DNS resolution of "localhost" and "*.localhost" relies on https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-06#section-3; I'm not sure exactly how that applies to WebKit, but point 2 requires that applications handle "localhost" and "*.localhost" themselves (without going through APIs or libraries), while point 3 requires the same of APIs and libraries (without going through recursive DNS servers).

Firefox and Chromium have treated loopback IP addresses as potentially trustworthy for a while, and now also treat "localhost" and "*.localhost" specially (they implemented the special case for the resolution of these domains).
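
To make points 4 and 5 concrete, here is a sketch of the two host checks the spec describes (hypothetical helpers, not WebKit's actual SecurityOrigin code), assuming the host has already been canonicalized to lowercase by URL parsing:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <string>

    // Point 4: the host is an IP literal in 127.0.0.0/8 or equal to ::1.
    static bool isLoopbackIPAddress(const std::string& host)
    {
        in_addr ipv4 {};
        if (inet_pton(AF_INET, host.c_str(), &ipv4) == 1)
            return (ntohl(ipv4.s_addr) & 0xFF000000u) == 0x7F000000u;

        // IPv6 literals appear bracketed in URLs ("[::1]"); strip the brackets first.
        std::string candidate = host;
        if (candidate.size() > 2 && candidate.front() == '[' && candidate.back() == ']')
            candidate = candidate.substr(1, candidate.size() - 2);
        in6_addr ipv6 {};
        if (inet_pton(AF_INET6, candidate.c_str(), &ipv6) == 1)
            return IN6_IS_ADDR_LOOPBACK(&ipv6);
        return false;
    }

    // Point 5: the host is "localhost" or any "*.localhost" name. Per the spec this
    // is a MAY, conditioned on the let-localhost-be-localhost resolution guarantee.
    static bool isLocalhostName(const std::string& host)
    {
        if (host == "localhost")
            return true;
        static const std::string suffix = ".localhost";
        return host.size() > suffix.size()
            && host.compare(host.size() - suffix.size(), suffix.size(), suffix) == 0;
    }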
Comment 78 Maciej Stachowiak 2020-10-29 10:41:29 PDT
For reference, the system DNS resolver on Apple platforms does not necessarily guarantee that localhost maps to loopback, so we do need to figure out how to do the bypass. 

Not 100% sure how to do this via NSURLSession. I thought it might work to remap http://localhost/ to http://127.0.0.1/ before invoking the HTTP stack, but this will alter the Host header, which may change behavior.
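
To illustrate the Host header concern (a generic sketch with hypothetical names, not NSURLSession/CFNetwork API): if the URL is remapped before the request is built, the original authority still has to be sent as the Host header, otherwise name-based virtual hosting and Host checks on the local server would see 127.0.0.1 instead of localhost:

    #include <string>

    struct RewrittenRequest {
        std::string url;        // what the HTTP stack actually connects to
        std::string hostHeader; // what the server must still see in the Host header
    };

    // Hypothetical helper: remap a localhost URL to a loopback IP while preserving
    // the original authority for the Host header.
    static RewrittenRequest remapLocalhost(const std::string& scheme, const std::string& host,
        const std::string& port, const std::string& path)
    {
        std::string authority = host + (port.empty() ? "" : ":" + port);
        if (host == "localhost")
            return { scheme + "://127.0.0.1" + (port.empty() ? "" : ":" + port) + path, authority };
        return { scheme + "://" + authority + path, authority };
    }

    // remapLocalhost("http", "localhost", "8080", "/ws") connects to
    // http://127.0.0.1:8080/ws but should still send "Host: localhost:8080".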
Comment 79 Michael Catanzaro 2020-10-29 11:40:37 PDT
(In reply to Frédéric Wang (:fredw) from comment #77)
> point 2. requires that
> applications treat "localhost" and "*.localhost" themselves (without passing
> by APIs or libraries) 

Note we currently have not implemented this for WPE/GTK; instead, we just assume GLib is new enough that the default GResolver will do what we want.
Comment 80 Frédéric Wang (:fredw) 2020-10-30 03:01:06 PDT
So I think we are more or less on the same page about the importance of this.

My proposal is to start with a simple approach:

* Introduce two preferences, TrustworthyLoopbackIPAddresses and TrustworthyLocalhostAddresses, that can be disabled for the sake of mixed content tests, or if we can't guarantee that the APIs used map localhost addresses to loopback.

* The TrustworthyLoopbackIPAddresses pref would implement comment 46, i.e. point 4 of https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

* The TrustworthyLocalhostAddresses pref would implement point 5 of https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

TrustworthyLoopbackIPAddresses would be enabled on all ports by default. For TrustworthyLocalhostAddresses, we can start with it disabled by default and later enable it conditionally depending on the WebKit port or APIs.

Note that this won't follow point 2 of https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-06#section-3, since we would be delegating the resolution to the APIs, but I'm not sure there is an easy alternative for now, IIUC what Michael and Maciej said.
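
As a sketch of how the two preferences could gate the behavior (the preference names come from this comment; the settings plumbing and the helper functions, as sketched under comment 77, are hypothetical):

    #include <string>

    // Classification helpers as sketched under comment 77.
    bool isLoopbackIPAddress(const std::string& host);
    bool isLocalhostName(const std::string& host);

    struct MixedContentSettings {
        bool trustworthyLoopbackIPAddresses { true };  // proposed default: on for all ports
        bool trustworthyLocalhostAddresses { false };  // proposed default: off until resolution is guaranteed
    };

    static bool shouldTreatHostAsTrustworthy(const std::string& host, const MixedContentSettings& settings)
    {
        if (settings.trustworthyLoopbackIPAddresses && isLoopbackIPAddress(host))
            return true;
        if (settings.trustworthyLocalhostAddresses && isLocalhostName(host))
            return true;
        return false;
    }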
Comment 81 Frédéric Wang (:fredw) 2020-10-30 10:35:05 PDT
Created attachment 412772 [details]
WIP Patch
Comment 82 Frédéric Wang (:fredw) 2020-10-30 23:36:57 PDT
Created attachment 412836 [details]
WIP Patch
Comment 83 Frédéric Wang (:fredw) 2020-10-31 08:21:56 PDT
Created attachment 412843 [details]
WIP Patch
Comment 84 Frédéric Wang (:fredw) 2020-11-04 07:15:03 PST
Created attachment 413159 [details]
Patch
Comment 85 Frédéric Wang (:fredw) 2020-11-04 23:54:01 PST
Created attachment 413263 [details]
Patch
Comment 86 Titouan Rigoudy 2021-07-01 05:17:46 PDT
What's the status here?

In Chromium we'd like to start requiring that websites be delivered over HTTPS in order to make requests to localhost, but that presupposes that http://localhost is not blocked as mixed content. See also crbug.com/1211244.

Also note that the Fetch spec has recently been updated to reflect let-localhost-be-localhost: https://github.com/whatwg/fetch/commit/d20e8c932a639d04fb8251eb6d647fcd54ef47fd
Comment 87 Frédéric Wang (:fredw) 2021-07-01 05:30:02 PDT
(In reply to Titouan Rigoudy from comment #86)
> What's the status here?

I've personally not made any progress since the experiments done on GTK/WPE ports. AFAIK, these are the current issues:

1) The notion of "secure context" or "potentially trustworthy" implemented in WebKit does not clearly match what is in the spec and is not necessarily unified in one place. This is a non-blocking issue, but it can be a bit confusing and does not help review.

2) Many existing tests in WebKit rely on the fact that localhost/loopback are currently NOT treated as secure in order to check mixed content and similar features. See bug 127676 comment 15.

3) There are concerns discussed in the thread above about exposing more "local stuff" to the web. It seems to me this is orthogonal to the issue here but (IIUC) some Apple devs think this should be addressed first. See also bug 218623.

4) It's not really clear we can follow Firefox/Chromium's approach (as described in the let-localhost-be-localhost spec) which is to make the browser override any DNS or library resolution and force the "localhost" host name to really correspond to a loopback address. See also comment 78

In order to work around these issues, I experimented with GTK/WPE first ( https://bugs.webkit.org/show_bug.cgi?id=218627 and https://bugs.webkit.org/show_bug.cgi?id=219257 ). In particular, for these platforms we are sure libsoup resolves "localhost" to a loopback address, so (4) is not a problem.

Regarding (3), it's possible that the "private network requests" spec you've been working on could help; see bug 171934 comment 78. Maybe we should work on this first. But again, I can't speak for Apple...
Comment 88 Mike West 2021-07-01 07:09:53 PDT
+wilander@, since we talked about this on Twitter yesterday.
Comment 89 John Wilander 2021-07-01 07:15:12 PDT
(In reply to Titouan Rigoudy from comment #86)
> What's the status here?
> 
> In Chromium we'd like to start requiring that websites be delivered HTTPS in
> order to make requests to localhost, but that presupposes that
> http://localhost is not blocked as mixed content. See also crbug.com/1211244.
> 
> Also note that the Fetch spec has recently been updated to reflect
> let-localhost-be-localhost:
> https://github.com/whatwg/fetch/commit/
> d20e8c932a639d04fb8251eb6d647fcd54ef47fd

The status here is that what Chrome now wants to do is exactly what we argued, back in 2017, should be done; Chrome, Firefox, and Edge didn’t want to do it at the time. We have always argued that the right step is to block localhost on non-localhost HTTP pages and only allow it on HTTPS pages or on pages from localhost itself.
Comment 90 Mike West 2021-07-01 07:19:40 PDT
I'm excited to hear that we have agreement on a path forward!
Comment 91 Titouan Rigoudy 2021-07-01 08:47:38 PDT
That's great! I think it indeed means we agree on both points:

1) https://example.com should be able to make subresource requests to http://localhost

2) http://example.com should not

This bug focuses on making #1 a reality in WebKit. Private Network Access focuses on fixing #2, but is experiencing difficulties because of this bug.

It sounds like the next step is for a reviewer to work with Frederic to land his patches?
Comment 92 John Wilander 2021-07-01 09:24:50 PDT
(In reply to Titouan Rigoudy from comment #91)
> That's great! I think it indeed means we agree on both points:
> 
> 1) https://example.com should be able to make subresource requests to
> http://localhost
> 
> 2) http://example.com should not
> 
> This bug focuses on making #1 a reality in WebKit. Private Network Access
> focuses on fixing #2, but is experiencing difficulties because of this bug.
> 
> It sounds like the next step is for a reviewer to work with Frederic to land
> his patches?

Nope. This may be a misunderstanding. For me and what I’ve communicated, these two go in tandem. We don’t want to “allow more localhost connections.” Allowing them on HTTPS pages without blocking them on HTTP is effectively “allowing more localhost connections.” This is exactly why this isn’t moving. People come back and want 1) without 2), and that’s not the right move. That’s the 2017 disagreement all over again.
Comment 93 Alexander Nestorov 2021-07-01 09:47:52 PDT
The 2017 disagreement still exists because WebKit devs are failing to:

1) Understand that there are valid use cases for a website to want to communicate with localhost.
Luca Cipriani gave an excellent example of a valid use case. More on that in a few lines.

2) Provide any alternative methods or suggestions for how to overcome the problems that arise from refusing to fix this bug/request.

You're sitting on the "no" position without trying to work with people who have valid reasons for asking what they are asking.

More on "1": I could give you another argument, yet very similar to Luca's:
We're working on a webtool that should interact with a CLI tool running a server on localhost.
There is absolutely no reason whatsoever for the webtool to be sending user information to a
remote server so then the CLI tool can fetch that same data. It's literally less secure doing
so rather than just allowing the webtool to send requests directly to the CLI tool's localhost server.
And making users use our webtool on HTTP just defies any security practices, which is your
main argument for not honouring this bug/request in the first place.

To sum up, unless I've totally misunderstood this entire 4-year-long conversation, your suggestions so far are:

1) "Don't do that. I'm not sure why you want to do it, but don't do it because your use case is not valid."
2) "Serve your webtool over insecure HTTP, because we're concerned that allowing mixed content on requests to localhost from HTTPS context will be a security risk".
Comment 94 Michael Catanzaro 2021-07-01 10:16:17 PDT
(In reply to John Wilander from comment #89)
> The status here is that what Chrome now wants to do is exactly what we
> argued should be done already in 2017 and Chrome, Firefox, and Edge didn’t
> want to do it at the time. We have always argued that the right step is to
> block localhost on non-localhost HTTP pages and only allow it on HTTPS pages
> or on pages from localhost itself.

Blocking localhost access from http:// URIs seems entirely uncontroversial. There's no way that should continue to be allowed.

Alexey's previous position in this bug was that *all* access to localhost from non-localhost origins should be blocked. That's the main reason this bug was stalled. If Apple is indeed now willing to allow access from non-localhost https:// origins, then there's no longer any substantial disagreement here.
Comment 95 Titouan Rigoudy 2021-07-02 02:54:55 PDT
> Nope. This may be a misunderstanding. For me and what I’ve communicated, these two go in tandem. We don’t want to “allow more localhost connections.” Allowing then on HTTPS pages without blocking them on HTTP is effectively “allow more localhost connections.”

I see. I guess I fail to understand the reasoning behind this stance. 

I could understand it if shipping #1 presented real risks to WebKit's ability to ship #2. Then indeed, one would not want to paint oneself in a corner. I believe the opposite is true, however.

Shipping #1 (fixing this bug and aligning with the spec) would indeed effectively "allow more localhost connections", but only those you yourself want everyone to migrate towards. Web developers can then develop secure web applications that interact with loopback, and WebKit users' security is better off as a result. Reaching for a metaphor: digging a new, straighter canal will only reduce the flow through the existing riverbed you wish to seal off.

This then enables you to ship #2: you can then tell web developers to upgrade their web apps to HTTPS if they want to keep making requests to loopback. Otherwise, if you try to ship #2, a lot of angry web developers will inevitably tell you there is no viable alternative to the behavior you are deprecating.

I myself am trying to ship #2 in Chromium, where #1 has shipped for a while, and it already is hard enough compatibility-wise without this added hurdle. Please accept my cautionary tale!
Comment 96 Sam Sneddon [:gsnedders] 2021-11-11 07:09:58 PST
To try to summarise the disagreements here (I think bug 171934 comment 87 is a good summary of the technical aspects):

1. The Secure Context spec requires 127.0.0.0/8 and ::1/128 to be "Potentially Trustworthy", whereas in WebKit they are currently "Not Trustworthy".

1a. This means currently in WebKit http://example.com can run fetch("http://127.0.0.1") successfully, but https://example.com cannot, as in the latter case it is blocked as mixed content. Per spec both should succeed.

2. The Secure Context spec requires localhost to be "Potentially Trustworthy" *if the user agent conforms to the name resolution rules in [let-localhost-be-localhost]*, whereas in WebKit it is currently (unconditionally) "Not Trustworthy".

2a. To implement this correctly, WebKit needs to know whether the port-mediated DNS resolution layer (which, e.g., for Apple ports is in other system libraries) does conform to those rules. There are a variety of possible approaches here: either hardcoding knowledge of "this version-or-later of the library conforms", or the library exposing an API which tells us whether or not it does.

2b. This means currently in WebKit http://example.com can run fetch("http://localhost") successfully, but https://example.com cannot, as in the latter case it is blocked as mixed content. Per spec both should succeed.

3. We have seen a number of attacks (both security and privacy) against servers running on localhost, hence there's a strong incentive against extending the ability to access localhost from the public web. At the moment, it is possible to execute such attacks from sites served from HTTP, but this at least prevents the majority of page loads (which happen from HTTPS) from being able to exploit them.

3a. There are a number of proposals to mitigate this, most obviously the Private Network Access spec (https://wicg.github.io/private-network-access/) which only allows secure contexts to access private networks and only after a CORS preflight.

I _believe_ people are fine with making progress on (1) and/or (2) if-and-only-if mitigation for (3) happens (and the general view seems to be that Private Network Access is sufficient mitigation), and what we don't want to do is land (1) and/or (2) _before_ mitigation for (3) lands. (Or, at the very least, we don't want to land them enabled by default. One could imagine them all landing in parts behind flags and then ultimately flipping the flags all at once.)
Comment 97 John Wilander 2021-11-11 07:36:24 PST
That’s a good summary, except that I’d add that whatever the specs say, we’ve been clear on this in the WebAppSec WG since it was originally discussed. This is not a case of WebKit making excuses or coming up with explanations after the fact. History has also shown that we were right to suspect that *disabling* access to localhost from non-secure contexts would take a long time and would not follow shortly if we just *enabled* access in secure contexts first.
Comment 98 Frédéric Wang (:fredw) 2021-12-10 02:10:19 PST
Removing myself from assignee since I'm not working on this anymore.
Comment 99 tobi 2022-09-04 01:41:21 PDT
Any updates on this? It still seems to be an issue in Safari today.
Comment 100 tobi 2022-09-04 01:49:39 PDT
Any updates on this? Is the only solution to proxy via a web server (just to support Safari)? I've resorted to doing this and it's super inefficient.
Comment 101 tobi 2022-09-04 01:53:50 PDT
How does Postman.com get around this issue on macOS? They run a local agent that sits behind an HTTP server at http://localhost:10533
Comment 102 Thomas Cannon 2022-10-11 09:54:48 PDT
Can we mark this as blocking 160504 as well? (https://bugs.webkit.org/show_bug.cgi?id=160504#add_comment)

With the release of Safari 16 and the general push for passkey adoption, having a stable, no-nonsense development environment for websites & apps to locally develop their passkey implementation is crucial. Being able to use `*.localhost` subdomains makes this extremely easy in Chrome/Firefox, since they tag the requests as being in a Secure Context, enabling WebAuthn.
Comment 103 Jaime Rivas 2023-06-06 11:56:43 PDT
Hi everyone - now that Safari 17 Beta is out, are there any updates / planned work on this issue? Thank you!
Comment 104 hecker 2023-06-29 17:33:10 PDT
Brave Browser added this bug as a feature: https://brave.com/privacy-updates/27-localhost-permission/

> As mentioned, most other browsers do not significantly prevent websites from accessing localhost resources. The desktop versions of Firefox and Chrome allow both secure and insecure public sites to access localhost resources, and seem to intend to allow public secure sites to access localhost resources indefinitely.
> As a side-effect of security restrictions, Safari currently blocks requests to localhost resources (as do other WebKit browsers) from secure public websites. But to the best of our understanding, Safari does not explicitly intend to block these requests from public websites.
> As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission).