Bug 179363 - iOS calling getUserMedia() again kills video display of first getUserMedia()
Summary: iOS calling getUserMedia() again kills video display of first getUserMedia()
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebRTC
Version: Safari 11
Hardware: iPhone / iPad iOS 11
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords: InRadar
Depends on:
Blocks:
 
Reported: 2017-11-06 23:16 PST by Chad Phillips
Modified: 2018-08-31 12:11 PDT
CC: 7 users

See Also:


Attachments
Multiple getUserMedia() streams controlled by UI. (3.41 KB, text/html)
2017-11-07 14:58 PST, Chad Phillips

Description Chad Phillips 2017-11-06 23:16:54 PST
On iOS, a second call to getUserMedia() kills the display of a video stream obtained by an earlier call to getUserMedia(). The original stream displays fine until the subsequent getUserMedia() call, then goes black. This is verified by the reference code below.

Note that this doesn't happen on Desktop Safari, only on iOS Safari in my tests.

Reference code:

<!DOCTYPE html>
<html>
  <body>
    <div>
      <video id="video1" autoplay playsinline></video>
    </div>
    <script type="text/javascript">
      // First gUM call: constrained video, displayed in #video1.
      var constraints1 = {
        audio: false,
        video: {
          height: {
            max: 480,
          },
          width: {
            max: 640,
          },
        },
      };
      navigator.mediaDevices.getUserMedia(constraints1).then(function(stream) {
        var video1 = document.getElementById('video1');
        video1.srcObject = stream;
      }).catch(function(err) {
        console.error("Device access checks failed: ", err, constraints1);
      });
      // Second gUM call: requests video again; on iOS, this alone
      // blacks out #video1 even though the stream is never used.
      var constraints2 = {
        audio: false,
        video: true,
      };
      navigator.mediaDevices.getUserMedia(constraints2).then(function(stream) {
        // Intentionally unused -- merely obtaining the stream triggers the bug.
      }).catch(function(err) {
        console.error("Device access checks failed: ", err, constraints2);
      });
    </script>
  </body>
</html>
Comment 1 Chad Phillips 2017-11-07 14:58:03 PST
Created attachment 326267
Multiple getUserMedia() streams controlled by UI.

Attaching a more robust test case that lets the user start and stop multiple streams obtained by getUserMedia().

Either 'Video 1' or 'Video 2' can be started and stopped without problems as long as the other is not actively streaming. But if you start one while the other is streaming to the screen, the stream that was already playing goes black.
Comment 2 Chad Phillips 2017-11-22 20:05:49 PST
I've spent some more time digging into this issue, and it turns out that the video MediaStreamTrack of video 1 has its 'muted' property set to true upon any subsequent gUM call that requests a video stream.

It's not even necessary for this gUM call to do anything with the video stream (like display it) for the previous video MediaStreamTrack to be muted.

Furthermore, I see no way via the API to unmute the muted video track -- the 'muted' property is read-only, and toggling the 'enabled' property of either video track has no effect on its state.
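
For reference, a minimal sketch of the behavior described above (assuming stream1 holds the MediaStream from the first gUM call):

var track = stream1.getVideoTracks()[0];
console.log(track.muted);  // true once a second gUM call requests video
track.enabled = false;     // toggling 'enabled'...
track.enabled = true;      // ...has no effect on the muted state
// track.muted = false;    // not possible: 'muted' is read-only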

Is this issue related to the 'Multiple Simultaneous Audio or Video Streams' limitation noted at https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/Device-SpecificConsiderations/Device-SpecificConsiderations.html ?

If so, it's going to be severely limiting for certain multiparty videoconferencing applications. For example:

 - It's common practice to show a user their own local video feed with one (higher resolution) stream, and publish another (lower resolution) stream to other users (see the sketch after this list)

 - To accommodate receivers with different bandwidth capabilities, a common practice is to publish both a high resolution and a low resolution stream
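
To make the first pattern concrete, here is a hedged sketch (selfView and pc are assumed, hypothetical names; the constraint values are only illustrative):

// High-resolution stream for the local self-view:
navigator.mediaDevices.getUserMedia({
  video: { width: { ideal: 1280 }, height: { ideal: 720 } },
}).then(function(stream) {
  document.getElementById('selfView').srcObject = stream;
});
// Lower-resolution stream published to the other participants; on iOS,
// this second request is what mutes the first stream's video track:
navigator.mediaDevices.getUserMedia({
  video: { width: { ideal: 320 }, height: { ideal: 240 } },
}).then(function(stream) {
  stream.getTracks().forEach(function(track) {
    pc.addTrack(track, stream);
  });
});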
Comment 3 Chad Phillips 2018-04-23 21:14:27 PDT
This is still an issue with iOS 11.3; I'd love for somebody to have a look at it.

IMO, it's unnecessarily limiting to block video after a user has granted access to the camera, as mentioned in the specific use cases in previous comments.

Limitations like this make the in-browser WebRTC experience on iOS the worst of any platform -- is that really what Apple wants?
Comment 4 daginge 2018-06-28 01:16:35 PDT
Just chiming in here. This is a blocker for us when switching between the front and back cameras on iOS Safari, at least if we want to do so with minimal disruption to the user experience.
Comment 5 youenn fablet 2018-06-29 11:01:08 PDT
>  - It's common practice to show a user their own local video feed with one
> (higher resolution) stream, and publish another (lower resolution) stream to
> other users

Understood that this is not optimal, although in most UIs, the local video track usually takes up a smaller part of the screen than the remote video tracks.
Also, applyConstraints should be the solution for changing the resolution of a given video track.

At this point, it is not possible to have two tracks with different resolutions. Ideally, this should be feasible using MediaStream cloning and applyConstraints.
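
A minimal sketch of that clone-and-applyConstraints approach (localVideo and peerConnection are assumed to exist; per the above, this is the ideal, not something that currently works in Safari):

navigator.mediaDevices.getUserMedia({ video: true }).then(function(stream) {
  var highResTrack = stream.getVideoTracks()[0];
  var lowResTrack = highResTrack.clone();
  // Constrain only the clone; the original track keeps its resolution.
  return lowResTrack.applyConstraints({ width: 320, height: 240 }).then(function() {
    localVideo.srcObject = new MediaStream([highResTrack]);
    peerConnection.addTrack(lowResTrack, stream);
  });
});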

Note that general support for multiple video streams might not always be feasible, in particular if the streams are coming from multiple capture devices.

>  - To accommodate receivers with different bandwidth capabilities, a common
> practice is to publish both a high resolution and a low resolution stream

Simulcast might be a better option there.
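
For context, a sketch of spec-style simulcast via sendEncodings (browser support varied at the time; pc and videoTrack are assumed names):

pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    { rid: 'high' },                                                // full-resolution layer
    { rid: 'low', scaleResolutionDownBy: 4.0, maxBitrate: 150000 }, // scaled-down layer
  ],
});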

(In reply to daginge from comment #4)
> Just chiming in here. This is a blocker for us in switching between front
> and back camera on Safari iOS. At least with minimal disruption to the user
> experience.

I would be interested in what disruption you encounter.
The following is expected to work without too much trouble:

navigator.mediaDevices.getUserMedia({ video: { facingMode: "user" } }).then((s) => {
  // Show the new capture locally and swap it into the existing connection.
  localVideo.srcObject = s;
  peerConnection.getSenders()[0].replaceTrack(s.getVideoTracks()[0]);
});
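
A complementary, hedged sketch for switching to the back camera, stopping the old capture first to avoid the muting described in this bug (same assumed localVideo and peerConnection):

function switchToBackCamera() {
  // Stop the current capture before requesting the other camera.
  localVideo.srcObject.getTracks().forEach((t) => t.stop());
  return navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } })
    .then((s) => {
      localVideo.srcObject = s;
      return peerConnection.getSenders()[0].replaceTrack(s.getVideoTracks()[0]);
    });
}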
Comment 6 Chad Phillips 2018-06-30 17:23:24 PDT
> Understood that this is not optimal, although in most UI,
> the local video track usually takes a smaller part of the
> screen than the remote video track.

It's not the only rational UI choice, though. I have a layout where all feeds, including the local user's feed, are the same size, and many users prefer this layout. Also, it doesn't seem like a very flexible architectural mindset to make that kind of assumption about how a designer wants to lay things out.

> Note that general support for multiple video streams might not
> always be feasible, in particular if the streams are coming from
> multiple capture devices.

I want to point out here that Chrome on Android has zero restrictions/limitations in this regard. You can call gUM multiple times, grab different resolutions, clone streams, etc., and it all works flawlessly. By comparison, iOS is a nightmare to work on for anything beyond the most basic use cases, which also makes the end-user experience worse on iOS because of all the compromises necessary. It's puzzling to me that Apple implemented WebRTC at all if they're going to hamstring it.

> Simulcast might be a better option there.

Not all clients support simulcast. For example, Chrome doesn't yet support it for H.264, which is of course the required codec if you want interop with iOS devices.
Comment 7 Chad Phillips 2018-08-31 10:24:29 PDT
Adding some further clarification from more testing:

1. This issue only occurs when a subsequent gUM() request asks for an already-requested media type. For example, if gUM() #1 asks for video and gUM() #2 also asks for video, gUM() #1's video stream is affected. However, if gUM() #2 only asks for audio, then gUM() #1's video stream is NOT affected (see the sketch below).

2. Because the mechanism is setting the track's 'muted' property, data is still sent along a peer connection, although it's not very useful, since the other side only receives muted video.
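
A sketch of the distinction in point 1, reusing the structure of the original repro (gUM #1 requests video and displays it in #video1):

// gUM #2, variant A: requests video again -- this blacks out #video1 on iOS:
navigator.mediaDevices.getUserMedia({ audio: false, video: true });
// gUM #2, variant B: requests only audio -- #video1 keeps playing:
navigator.mediaDevices.getUserMedia({ audio: true, video: false });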
Comment 8 Chad Phillips 2018-08-31 10:39:21 PDT
This issue also occurs for audio tracks.

I now believe the issue can be fully summarized as: if a getUserMedia() call requests a media type requested by a previous getUserMedia() call, the previously requested media track's 'muted' property is set to true, and there is no way to programmatically unmute it.
Comment 9 youenn fablet 2018-08-31 10:57:37 PDT
This is currently the expected behavior.
There seem to be two requests here:
- Allow multiple captures from the same capture device with different parameters (resolution, frameRate, etc.).
- Allow capture on two different capture devices at the same time.
Comment 10 Chad Phillips 2018-08-31 11:35:00 PDT
@youenn, those seem correct, and are supported by every other platform I've tried.
Comment 11 Radar WebKit Bug Importer 2018-08-31 12:11:49 PDT
<rdar://problem/43950488>