RESOLVED FIXED 61287
[UNIX] Use SOCK_SEQPACKET when available
https://bugs.webkit.org/show_bug.cgi?id=61287
Carlos Garcia Campos
Reported 2011-05-23 08:12:10 PDT
While implementing the plugin process for the GTK port I noticed that when closing a page containing a plugin, the web process got blocked waiting for a sync message reply that was never sent because the plugin process was about to die. Since DGRAM sockets are not connection-oriented, when one end closes the connection the other end is not notified. With STREAM sockets, a 0-byte read happens when the other end closes the connection. So the trivial fix would be just using STREAM instead of DGRAM, but in that case sendmsg/recvmsg behaves a bit differently: there's no 1-to-1 relationship between sendmsg and recvmsg calls, so we need to handle the message boundaries in the receiver. Linux has SOCK_SEQPACKET, which does exactly what we want: it behaves like a STREAM socket, but message boundaries in incoming datagrams are preserved. However, it's Linux-specific.
Attachments
Patch to use SOCK_STREAM (13.22 KB, patch)
2011-05-25 11:31 PDT, Carlos Garcia Campos
no flags
Updated patch fixing style issues (13.22 KB, patch)
2011-05-25 11:38 PDT, Carlos Garcia Campos
no flags
New patch using SOCK_SEQPACKET when available (14.39 KB, patch)
2011-05-26 01:59 PDT, Carlos Garcia Campos
no flags
Patch to use SEQPACKET when possible (3.36 KB, patch)
2011-05-26 09:01 PDT, Carlos Garcia Campos
darin: review+
Balazs Kelemen
Comment 1 2011-05-23 10:03:55 PDT
Performance should be taken into consideration as well. I'm not an expert on the topic, but I can imagine that datagram sockets are faster because they are not reliable, which is not a problem for us because our IPC happens locally. If this is the case, I believe we should try to work around the problem if possible. Please correct me if I'm wrong about performance.
Anders Carlsson
Comment 2 2011-05-23 10:08:59 PDT
FWIW, it looks like chromium uses SOCK_STREAM sockets.
Carlos Garcia Campos
Comment 3 2011-05-23 10:12:04 PDT
(In reply to comment #1)
> Performance should be taken into consideration as well.

Sure.

> I'm not an expert of the topic but I can imagine that datagram sockets are faster because they are not reliable - which is not a problem for us because our IPC is happening locally. If this is the case I believe we should try to work around the problem if it is possible.

I'm not an expert either, but yes, I guess DGRAM sockets should be faster. The only problem we have with DGRAM sockets is being notified when the other end closes the connection, which is not a frequent operation, so maybe we can continue using DGRAM sockets and add a new message, only for Unix ports, to notify when the connection is closed.
Carlos Garcia Campos
Comment 4 2011-05-23 10:15:55 PDT
(In reply to comment #2)
> FWIW, it looks like chromium uses SOCK_STREAM sockets.

No, it seems they use SOCK_SEQPACKET in Linux, see http://src.chromium.org/viewvc/chrome/trunk/src/content/browser/zygote_host_linux.cc?view=markup
Carlos Garcia Campos
Comment 5 2011-05-23 10:16:55 PDT
(In reply to comment #4)
> No, it seems they use SOCK_SEQPACKET in Linux, see
> http://src.chromium.org/viewvc/chrome/trunk/src/content/browser/zygote_host_linux.cc?view=markup

Well, those are indeed STREAM sockets.
Carlos Garcia Campos
Comment 6 2011-05-25 11:31:36 PDT
Created attachment 94813 [details]
Patch to use SOCK_STREAM

This patch uses SOCK_STREAM instead of SOCK_DGRAM, handling message boundaries in the receiver.
WebKit Review Bot
Comment 7 2011-05-25 11:33:23 PDT
Attachment 94813 [details] did not pass style-queue:

Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'Source/WebKit2/ChangeLog', u'Source/WebKit..." exit_code: 1
Source/WebKit2/Platform/CoreIPC/unix/ConnectionUnix.cpp:327: Tests for true/false, null/non-null, and zero/non-zero should all be done without equality comparisons. [readability/comparison_to_zero] [5]
Source/WebKit2/Platform/CoreIPC/unix/ConnectionUnix.cpp:334: Semicolon defining empty statement for this loop. Use { } instead. [whitespace/semicolon] [5]
Total errors found: 2 in 4 files

If any of these errors are false positives, please file a bug against check-webkit-style.
Carlos Garcia Campos
Comment 8 2011-05-25 11:38:51 PDT
Created attachment 94816 [details]
Updated patch fixing style issues
Balazs Kelemen
Comment 9 2011-05-25 16:08:50 PDT
Could you do some kind of benchmark to compare the performance of stream IPC with datagram IPC? Would you like me to do it?
Simon Hausmann
Comment 10 2011-05-25 22:49:10 PDT
(In reply to comment #0)
> While implementing the plugin process for the GTK port I noticed that when closing a page containing a plugin, the web process got blocked waiting for a sync message reply which was never sent because the plugin process was about to die.

I think we should also consult Kimmo about this, since he converted ConnectionQt.cpp to use DGRAM sockets.
Kimmo Kinnunen
Comment 11 2011-05-25 23:46:08 PDT
(In reply to comment #0)
> While implementing the plugin process for the GTK port I noticed that when closing a page containing a plugin, the web process got blocked waiting for a sync message reply which was never sent because the plugin process was about to die.

So the problem here is not the usage of SOCK_DGRAM per se. AFAICS the real problem is that the web process or UI process is not able to observe the plugin process crash.

The original intention of using SOCK_DGRAM was two-fold:
- it guarantees that the message is not buffered at any point redundantly (minimum latency)
- it simplifies the implementation greatly

The original solution to the real problem in this bug report was to listen for the child process crash by other means. The original reason for using SOCK_DGRAM instead of SOCK_SEQPACKET was to avoid discussion about the Linux-specificness of the original patch.

I would advise not using SOCK_STREAM. The main problem I see is the lack of guarantees about buffering and the need for the receiver to preserve buffers, split messages correctly, etc. The implementation also gets quite complex, as you need to buffer the file descriptors. Anecdotal evidence of this complexity might be, for example, that the suggested patch leaks file descriptors when the connection is closed while the fds are in the buffer (AFAICS from very quick inspection, sorry if I'm wrong).

Also remember that the Mac implementation is dgram-based, so it is very much easier to achieve similar performance characteristics, avoid unnecessary kludging in common parts, and maybe even reuse code.

My suggestion would be (not that it should matter at this point):
1. Use SOCK_SEQPACKET on Linux; observe crashes with recvmsg == 0.
2. On non-Linux, use a platform-specific way of observing the child process crash.
3. Rename this bug.

1. is needed only if you are a perfectionist; otherwise 2. alone suffices. To do 2. cross-platform, you can for example open a dummy SOCK_STREAM socket during fork and use it to communicate the crash of the child process.
I would expect this to have much less overhead than redundant buffering of should-not-be-buffered messages.
Carlos Garcia Campos
Comment 12 2011-05-26 00:18:04 PDT
(In reply to comment #11)
> So the problem here is not the usage of SOCK_DGRAM per se. AFAICS the real problem is that the web process or UI process is not able to observe plugin process crash.

Not exactly. We already detect when the child process crashes, and that works. The problem is when the child process closes the connection and finishes: in the case of the plugin process there's a timeout before the process finishes. The plugin process closes the connection and the web process doesn't notice, so it gets blocked waiting for the sync reply to the DestroyPlugin message. If the plugin process finishes before the sync timeout, the web process is notified and wakes up; otherwise it wakes up after the sync timeout. Either way the web process is blocked for a while. So we want to be notified as soon as the connection is closed by the other end, regardless of whether the process finishes or not.

> The original intention of the of using SOCK_DGRAM was two-fold:
> - it guarantees that the message is not buffered at any point redundantly (minimum latency)
> - it simplifies implementation greatly

Yeah, I agree.

> The original reason of using SOCK_DGRAM instead of SOCK_SEQPACKET was to avoid discussion about Linux specificness of the original patch.

Yes, DGRAM and STREAM are more portable. SEQPACKET is exactly what we want: it uses the DGRAM functions internally and we get notified when the other end closes the connection.

> Anecdotal evidence of this complexity might be for example that the suggested patch leaks file descriptors when connection is closed while the fds are in buffer.

Oops :-P

> Also remember that Mac implementation is dgram-based, so it is very much easier to achieve similar performance characteristics, avoid unnecessary kludging in common parts and maybe even code reuse.

Agree.

> To do 2. cross-platform, you can for example open dummy SOCK_STREAM during fork and make that communicate the crash of child process. I would expect this to have much less overhead than redundant buffering of should-not-be-buffered messages.

Wouldn't polling another socket be a performance issue too? Anyway, I'll try that way: using SEQPACKET when available and falling back to polling a dummy socket. Thank you very much for the comments. Btw, could you also take a look at the patch attached to bug #60621, please?
Carlos Garcia Campos
Comment 13 2011-05-26 00:19:00 PDT
(In reply to comment #5)
> well, which are indeed STREAM sockets.

I was wrong; it actually uses the DGRAM functions internally, see: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=net/unix/af_unix.c;h=0722a25a3a33e6717b5e305d1cf0550b3abb9bb1;hb=HEAD#l1685
Carlos Garcia Campos
Comment 14 2011-05-26 01:59:28 PDT
Created attachment 94943 [details]
New patch using SOCK_SEQPACKET when available

Actually, I think falling back to STREAM sockets is much easier than using a dummy socket, because we can use the same code for both STREAM and SEQPACKET sockets. For SEQPACKET, m_readBufferSize == messageLength && m_fileDescriptorsSize == messageInfo.attachmentCount(), so it will never memmove.
Kimmo Kinnunen
Comment 15 2011-05-26 02:16:07 PDT
(In reply to comment #14)
> Actually, I think falling back to STREAM sockets is much easier than using a dummy socket, because we can use the same code for both STREAM and SEQPACKET

No you cannot, if you don't modify the current code. If you do modify the current code, as with this patch, then you don't accept the point about the redundant complexity and possible regressions the patch causes.
Carlos Garcia Campos
Comment 16 2011-05-26 02:26:11 PDT
(In reply to comment #15)
> No you cannot, if you don't modify the current code.
>
> If you do modify the current code, as with this patch, then you don't accept the point about redundant complexity and possible regressions the patch causes.

If it's a matter of code complexity, I don't think this patch is that different from the current code. Wouldn't it be better to split this again and use QSocket/GSocket? Using GSocket this code would be just a few lines, and I guess it would be the same for Qt.
Carlos Garcia Campos
Comment 17 2011-05-26 02:46:00 PDT
(In reply to comment #16)
> wouldn't it be better to split this again and use QSocket/GSocket?, Using GSocket this code would be just a few lines, and I guess it would be the same for Qt.

And the same code would work on Windows too.
Kimmo Kinnunen
Comment 18 2011-05-26 02:51:12 PDT
(In reply to comment #16)
> wouldn't it be better to split this again and use QSocket/GSocket?, Using GSocket this code would be just a few lines, and I guess it would be the same for Qt.

You have to ask the person who introduced this to the GTK port; I don't feel I'm the correct person to express opinions about the GTK port.

As for the Qt port: the Qt socket API cannot provide the features needed, and it's anyway more robust to use direct low-level API calls for such a low-level feature as IPC. The datagram-based implementation is IMHO superior to the stream-based implementation, for the rationale already given in this bug entry. As such, again in my opinion, it would be a sad choice to regress the current implementation.
Kimmo Kinnunen
Comment 19 2011-05-26 02:51:48 PDT
(In reply to comment #17)
> And the same code would work on windows too.

I don't think it's the correct solution for Windows IPC.
Carlos Garcia Campos
Comment 20 2011-05-26 03:23:44 PDT
(In reply to comment #19)
> I don't think it's the correct solution to windows IPC.

Yes, you are right, and I agree that using the low-level APIs might be more robust even if it's more code, and I think sharing code is good.

When you say the message is buffered redundantly, what do you mean? Comparing the current code and the patch, we have the following heap allocations.

Current code: we allocate the readBuffer in platformInitialize(); then every time readyReadHandler() is called:
- attachmentDescriptorBuffer is allocated/deallocated
- attachmentSizes is allocated/deallocated
- fileDescriptors is allocated/deallocated

With the patch: we allocate the readBuffer and fileDescriptors buffers in platformInitialize(); then every time readyReadHandler() is called:
- attachmentDescriptorBuffer is allocated/deallocated in readBytesFromSocket()
- attachmentSizes is allocated/deallocated in processMessage()

So we are even reducing the number of allocations/deallocations. We could also avoid the attachmentDescriptorBuffer allocation in both cases by using alloca() where available.
Kimmo Kinnunen
Comment 21 2011-05-26 03:46:23 PDT
(In reply to comment #20)
> When you say the message is buffered redundantly, what do you mean?

I mean that SOCK_STREAM sockets are buffered: the socket API doesn't say when and what amount of data is sent.

I also mean that the kernel does not guarantee anything regarding the message sizes.

In other words: with a stream socket, possible non-message-boundary-aligned data has to survive across your read buffer, so you need to buffer fractional messages in your application code. Quickly glancing at the diff I would say your patch doesn't take this into account, but again, sorry if I'm wrong.
Carlos Garcia Campos
Comment 22 2011-05-26 03:59:14 PDT
(In reply to comment #21)
> I mean, SOCK_STREAM sockets are buffered. The socket API doesn't say when and what amount of data is sent.

Yes; note that a STREAM socket would only be used if SOCK_SEQPACKET is not available.

> With stream socket, possible non-message-border-aligned data has to survive across your read buffer. Thus you need to buffer fractional messages in your application code.

Yes, but we use the same buffer, simply moving the outstanding bytes to the beginning of the buffer after processing each message, and then we make sure not to read more than what fits in the buffer. You could also read more than one message with a single recvmsg(), reducing the number of syscalls.

> Quickly glancing to the diff I would say your patch doesn't take this into account, but again, sorry if I'm wrong.

Yes, it handles this for both the bytes read and the file descriptors received, using a single buffer in each case.
Kimmo Kinnunen
Comment 23 2011-05-26 05:07:11 PDT
(In reply to comment #22)
> Yes, it handles it for both the bytes read and the file descriptors received using a single buffer in both cases.

Ok, good. Sorry.

> Yes, note that STREAM socket would only be used if SOCK_SEQPACKET is not available.

First you say STREAM is not used...

> You could also read more than one message with a single recvmsg() reducing the amount of syscalls.

...then you justify the complexity with a STREAM feature?

Look, I have to stop wasting your time and my own. I'm not a reviewer, I don't have much to say about this issue, and you don't need to listen to my argumentation. Certainly this is even more the case with the GTK port.

To conclude my stand here: you are solving the problem of being notified when the connection closes. I don't think implementing entirely correct, arbitrary stream protocol parsing is the way to do it. The IPC is designed as a datagram protocol; requiring a 100% correct stream protocol implementation is insanity.

I can think of a few other, far simpler implementations for close notifications. One is the socket thing, which solves the crash problem too. The "orderly shutdown" scenario can also be solved by sending a simple message: for example, you can send the datagram "KIMMO" to signify close, and that will work.
Carlos Garcia Campos
Comment 24 2011-05-26 05:27:42 PDT
(In reply to comment #23)
> First you say STREAM is not used..

Yes, I meant it's used only as a fallback for non-Linux systems.

> ..then you argument justification of the complexity due to a STREAM feature?

You are right; that comment is out of place.

> Look, I gotta stop right now wasting your and my own time. I'm not a reviewer, I don't have much to say about this issue and you don't need to listen to my argumentation.

I appreciate your argumentation. The Qt port will have the same problem when the plugin process is implemented; I'm just trying to find the best way to fix it.

> The IPC is designed as a datagram protocol. Requiring 100% correct stream protocol implementation is insanity.

I agree, but this is only for the fallback case. What I was trying to say is that the same implementation that works for stream sockets works for dgram sockets too, without a performance penalty.

> I could think of few other, far simpler implementations for close notifications. One is the socket thing, which solves the crash problem too.

I don't think the socket thing is that easy: we would need another pair of connected sockets, right? So we would need to keep two sockets per connection and close both when the connection finishes. But only when SOCK_SEQPACKET is not available.

> "The orderly shutdown" scenario can be solved by also sending a simple message. For example, you can send datagram "KIMMO" to signify close, and that will work.

That was my first proposal, and the Apple guys said there must be another way to do it.
Kimmo Kinnunen
Comment 25 2011-05-26 05:40:05 PDT
(In reply to comment #24)
> I appreciate your argumentation

This doesn't really come across. I don't argue for fun, and this discussion is still going nowhere!

> I don't think the socket thing is that easy, we would need another pair of connected sockets, right? so we would need to keep two sockets per connection and close both when the connection finishes. But only when SOCK_SEQPACKET is not available.

I would expect this to be "easier" than what you are currently doing. I haven't implemented it, but socketpair() before fork() gives you the sockets, and then you just observe them.
Carlos Garcia Campos
Comment 26 2011-05-26 05:47:47 PDT
(In reply to comment #25)
> This doesn't really come across. I don't argument for fun, and this discussion is still going nowhere!

I meant that I don't think the discussion is going nowhere; knowing what other people think helps me see how we can solve this in a way we all agree on.
Simon Hausmann
Comment 27 2011-05-26 07:05:57 PDT
(In reply to comment #23)
> I could think of few other, far simpler implementations for close notifications. One is the socket thing, which solves the crash problem too. "The orderly shutdown" scenario can be solved by also sending a simple message. For example, you can send datagram "KIMMO" to signify close, and that will work.

I agree with Kimmo here. The current DGRAM-socket-based code is optimal for IPC and thus the common case. I don't think we should pessimize the common case just to cover the close notification with the same mechanism.
Carlos Garcia Campos
Comment 28 2011-05-26 08:55:15 PDT
Ok, there are two different things here:
- use SEQPACKET when available
- how to implement the fallback when SEQPACKET is not available

I think we agree on the first one at least, so I'll split the patch and open a new bug report for the second one.
Carlos Garcia Campos
Comment 29 2011-05-26 09:01:01 PDT
Created attachment 94982 [details]
Patch to use SEQPACKET when possible
Carlos Garcia Campos
Comment 30 2011-05-26 09:11:40 PDT
Just created a new bug report for the second issue: https://bugs.webkit.org/show_bug.cgi?id=61538
Martin Robinson
Comment 31 2011-05-26 09:21:28 PDT
Comment on attachment 94982 [details]
Patch to use SEQPACKET when possible

View in context: https://bugs.webkit.org/attachment.cgi?id=94982&action=review

This approach seems optimal for systems with SOCK_SEQPACKET.

> Source/WebKit2/Platform/CoreIPC/unix/ConnectionUnix.cpp:196
> +        // Connection has been closed.
> +        connectionDidClose();
> +        return;
> +    }

I think the method call here makes the comment redundant.

> Source/WebKit2/UIProcess/Launcher/gtk/ProcessLauncherGtk.cpp:47
> +#ifdef SOCK_SEQPACKET
> +#define SOCKET_TYPE SOCK_SEQPACKET
> +#else
> +#define SOCKET_TYPE SOCK_DGRAM
> +#endif
> +

It's probably better to use SOCK_SEQPACKET directly until we have the fallback. People packaging for platforms that don't support SOCK_SEQPACKET will have some advance notice that we currently fail for them.
Carlos Garcia Campos
Comment 32 2011-05-26 09:29:13 PDT
(In reply to comment #31)
> I think the method call here makes the comment redundant.

Right, I added the comment because it's not obvious that a 0-byte read means the other end closed the connection, but the method name makes it clear, so I'll remove the comment.

> It's probably better to use SOCK_SEQPACKET directly until we have the fallback. People packaging for platforms that don't support SOCK_SEQPACKET will have some advance notice that we currently fail for them.

Well, it will still work with SOCK_DGRAM; it's just that in a few corner cases the web process might get blocked for a while.
Anders Carlsson
Comment 33 2011-05-26 10:14:15 PDT
One of the requirements of CoreIPC is that you should be able to know when the connection is closed. Just because a connection is closed doesn't mean that the process on the other end went away either, as is the case with the plug-in process.

I'd also like to see Connection::setShouldCloseConnectionOnProcessTermination removed. When the process is terminated, the connection should know this and close automatically.

Again, looking at the Chromium IPC (http://src.chromium.org/viewvc/chrome/trunk/src/ipc/ipc_channel_posix.cc?revision=82485&view=markup), they are using a stream socket.
Balazs Kelemen
Comment 34 2011-05-27 16:27:16 PDT
My opinion (and no more) after reading the discussion: if we want to support systems without SOCK_SEQPACKET (and I guess we do), then we need to implement the fallback for SOCK_DGRAM. So I don't think we should implement a path for SOCK_SEQPACKET at all (for systems where it is available). Furthermore, if SOCK_DGRAM is more efficient than SOCK_SEQPACKET (I guess Kimmo and Simon believe it is), then it is better to use SOCK_DGRAM with the fallback everywhere.

Regarding Anders's comment: I don't think it is a really serious problem that we have a quirk on platforms that implement CoreIPC with a slightly different technique than what is used on Mac and Windows (and Chromium).
Kimmo Kinnunen
Comment 35 2011-05-29 23:10:25 PDT
(In reply to comment #34)
> If we want to support systems without SOCK_SEQPACKET (and I guess we do) then we need to implement the fallback for SOCK_DGRAM. So, I don't think we should implement a path for SOCK_SEQPACKET at all (for systems where it is available).

I don't really understand these two comments. There's no separate code path for SOCK_SEQPACKET: the main code path for SEQPACKET is identical to DGRAM's, i.e. you can just switch the socket type and it will work. The difference is that with SEQPACKET you can observe the closing of the stream, and that is where you need the fallback if DGRAM is used. And the fallback can be an extra dummy stream socket.

> Furthermore, if SOCK_DGRAM is more effective than SOCK_SEQPACKET (I guess Kimmo and Simon believes that it is) than it is better to use SOCK_DGRAM with the fallback everywhere.

No, this is the wrong interpretation. I never said there would be a performance difference between SEQPACKET and DGRAM. In my comments, "datagram" describes the type of message handling, as opposed to "stream". IOW: SEQPACKET == datagram.
Balazs Kelemen
Comment 36 2011-05-30 01:39:56 PDT
I see. In this case it seems like the approach of the patch is good for everyone.
Darin Adler
Comment 37 2011-10-17 12:39:28 PDT
Comment on attachment 94982 [details]
Patch to use SEQPACKET when possible

View in context: https://bugs.webkit.org/attachment.cgi?id=94982&action=review

>>> Source/WebKit2/Platform/CoreIPC/unix/ConnectionUnix.cpp:196
>>> +    }
>>
>> I think the method call here makes the comment redundant.
>
> Right, I added the comment because it's not obvious that a 0-byte read means the other end closed the connection, but the method name makes it clear, so I'll remove the comment

A comment you could leave in would be "// A zero byte read indicates that the other end closed the connection."
Carlos Garcia Campos
Comment 38 2011-10-18 01:47:29 PDT