On 64-bit Windows, "unsigned" means a 32-bit unsigned integer, "size_t" means a 64-bit unsigned integer, and "unsigned long" also means a 32-bit unsigned integer. In X86Assembler.h, std::min(size, 15UL) mixes a size_t with an unsigned long, so Visual Studio cannot deduce a single template argument for std::min and the build fails. Here's a simple fix.
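To make the failure concrete, here is a minimal standalone sketch (illustrative names only, not the actual X86Assembler code). On LP64 Linux, size_t and unsigned long are the same 64-bit type, so the original call compiles there; on LLP64 Windows they differ, and deduction fails:

    #include <algorithm>
    #include <cstddef>

    unsigned clampExample(size_t size)
    {
        // Does not compile with MSVC on 64-bit Windows: size is a 64-bit
        // size_t, 15UL is a 32-bit unsigned long, and std::min cannot
        // deduce one template argument from two different types.
        //
        //     unsigned nopSize = std::min(size, 15UL);

        // Naming the template argument explicitly resolves the ambiguity
        // (this mirrors the patch; note the size_t is narrowed to unsigned).
        return std::min<unsigned>(size, 15UL);
    }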
Created attachment 229080: Patch
Comment on attachment 229080: Patch

r=me
Comment on attachment 229080: Patch

Clearing flags on attachment: 229080

Committed r167108: <http://trac.webkit.org/changeset/167108>
All reviewed patches have been landed. Closing bug.
Comment on attachment 229080: Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=229080&action=review

> Source/JavaScriptCore/assembler/X86Assembler.h:2276
> -        unsigned nopSize = std::min(size, 15UL);
> +        unsigned nopSize = std::min<unsigned>(size, 15UL);

Once we add <unsigned>, then there is no point in having the UL there any more. The L is particularly silly, since it gives us an unsigned long constant for no good reason.

Also, this function won’t work for any sizes that are greater than the maximum unsigned. It probably should either take an argument of type unsigned or work properly for large values of type size_t. Taking a size_t, and malfunctioning, seems like a strange choice.
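Following that advice, one way to drop the UL suffix and stay correct for sizes above the maximum unsigned is to clamp in size_t first and convert afterwards. A minimal sketch, assuming a hypothetical helper name (the real code computes this value inline inside the nop-emitting function):

    #include <algorithm>
    #include <cstddef>

    // Hypothetical helper; the real code computes this value inline.
    unsigned clampedNopSize(size_t size)
    {
        // Clamp in size_t first, then convert. The result of the min is
        // at most 15, so the conversion to unsigned can never truncate,
        // and the constant no longer needs any suffix.
        return static_cast<unsigned>(std::min<size_t>(size, 15));
    }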
(In reply to comment #5)
> Also, this function won’t work for any sizes that are greater than the maximum unsigned. It probably should either take an argument of type unsigned or work properly for large values of type size_t. Taking a size_t, and malfunctioning, seems like a strange choice.

I was shooting for a non-invasive fix, and I was assuming that size wouldn't get anywhere near ULONG_MAX. Do you think it would be worth switching all the unsigned types to size_t? I'm a big fan of using size_t unless a different type is needed, but I also don't work on JavaScriptCore much and don't want to muck up their code.
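For reference, switching to size_t throughout would let a nop-filling loop behave correctly even for sizes beyond the maximum unsigned. A hedged sketch, not the actual JavaScriptCore code (a real assembler emits multi-byte NOP encodings of up to 15 bytes; this just appends single-byte 0x90 NOPs to a buffer):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Illustrative only; fillWithNops is not a real X86Assembler function.
    void fillWithNops(std::vector<unsigned char>& buffer, size_t size)
    {
        while (size) {
            // x86 NOP instructions are at most 15 bytes, hence the clamp.
            size_t nopSize = std::min<size_t>(size, 15);
            buffer.insert(buffer.end(), nopSize, 0x90);
            size -= nopSize;
        }
    }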
You're right, Darin. A fix is in https://bugs.webkit.org/show_bug.cgi?id=131615