<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://bugs.webkit.org/page.cgi?id=bugzilla.dtd">

<bugzilla version="5.0.4.1"
          urlbase="https://bugs.webkit.org/"
          
          maintainer="admin@webkit.org"
>

    <bug>
          <bug_id>63767</bug_id>
          
          <creation_ts>2011-06-30 16:12:51 -0700</creation_ts>
          <short_desc>Remove the concept of &quot;being wedged&quot; from new-run-webkit-tests</short_desc>
          <delta_ts>2022-02-27 23:43:23 -0800</delta_ts>
          <reporter_accessible>1</reporter_accessible>
          <cclist_accessible>1</cclist_accessible>
          <classification_id>1</classification_id>
          <classification>Unclassified</classification>
          <product>WebKit</product>
          <component>New Bugs</component>
          <version>528+ (Nightly build)</version>
          <rep_platform>Unspecified</rep_platform>
          <op_sys>Unspecified</op_sys>
          <bug_status>RESOLVED</bug_status>
          <resolution>FIXED</resolution>
          
          
          <bug_file_loc></bug_file_loc>
          <status_whiteboard></status_whiteboard>
          <keywords></keywords>
          <priority>P2</priority>
          <bug_severity>Normal</bug_severity>
          <target_milestone>---</target_milestone>
          
          
          <everconfirmed>1</everconfirmed>
          <reporter name="Adam Barth">abarth</reporter>
          <assigned_to name="Adam Barth">abarth</assigned_to>
          <cc>dpranke</cc>
    
          <cc>eric</cc>

          <comment_sort_order>oldest_to_newest</comment_sort_order>
          <long_desc isprivate="0" >
    <commentid>430783</commentid>
    <comment_count>0</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:12:51 -0700</bug_when>
    <thetext>Remove the concept of &quot;being wedged&quot; from new-run-webkit-tests</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430785</commentid>
    <comment_count>1</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:14:14 -0700</bug_when>
    <thetext>Why?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430786</commentid>
    <comment_count>2</comment_count>
      <attachid>99391</attachid>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:14:47 -0700</bug_when>
    <thetext>Created attachment 99391
Patch</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430787</commentid>
    <comment_count>3</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:15:53 -0700</bug_when>
    <thetext>From the ChangeLog:

        Worker processes shouldn&apos;t ever become wedged.  My understanding is
        that this code was originally motivated by the old threading-based
        design but no longer serves any purpose.

        Note: If we actually have a problem with the test harness getting
        stuck, buildbot will kill us.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430788</commentid>
    <comment_count>4</comment_count>
      <attachid>99391</attachid>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:16:17 -0700</bug_when>
    <thetext>Comment on attachment 99391
Patch

It is still possible for workers to wedge up, and they should not hang the overall test run. I&apos;m R-&apos;ing this unless you can explain why we don&apos;t need this concept any more.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430789</commentid>
    <comment_count>5</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:17:34 -0700</bug_when>
    <thetext>(In reply to comment #3)
&gt; From the ChangeLog:
&gt; 
&gt;         Worker processes shouldn&apos;t ever become wedged.  My understanding is
&gt;         that this code was originally motivated by the old threading-based
&gt;         design but no longer serves any purpose.
&gt;

This is incorrect. Workers can still get wedged, it just happens much less often.
 
&gt;         Note: If we actually have a problem with the test harness getting
&gt;         stuck, buildbot will kill us.

When this happens, you get much less useful output from the test run, and it takes a lot longer. Plus, it&apos;s harder to diagnose what happened.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430793</commentid>
    <comment_count>6</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:19:15 -0700</bug_when>
    <thetext>&gt; This is incorrect. Workers can still get wedged, it just happens much less often.

How does this happen?  How often does it happen?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430796</commentid>
    <comment_count>7</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:23:14 -0700</bug_when>
    <thetext>(In reply to comment #6)
&gt; &gt; This is incorrect. Workers can still get wedged, it just happens much less often.
&gt; 
&gt; How does this happen?  How often does it happen?

In the chromium port, in particular, the test timeout is enforced by DRT itself, rather than by the python code. If DRT wedges and doesn&apos;t self-timeout, the worker will wedge also.

It doesn&apos;t seem to happen that often, but I&apos;ve definitely seen it happen.

It is possible that, if you fix the Chromium code and we can safely assert that workers never wedge, this code will no longer be necessary. I personally think it&apos;s still a good defense-in-depth that makes the tool more robust.

Also, I&apos;ll note that a few days ago you were asking about --run-singly and run_in_another_thread. If you got rid of this code, that would be yet another reason to have to keep run_in_another_thread around. I&apos;d rather get rid of run_in_another_thread(), since the manager is a more appropriate place for this sort of logic.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430809</commentid>
    <comment_count>8</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:28:30 -0700</bug_when>
    <thetext>I just checked the last 100 builds on Webkit Mac10.6 (deps) and this condition has not occurred.  Would you like me to check the last thousand or would you like me to check another bot?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430813</commentid>
    <comment_count>9</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:29:04 -0700</bug_when>
    <thetext>(To be clear, that was the Chromium bot)</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430817</commentid>
    <comment_count>10</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:31:08 -0700</bug_when>
    <thetext>(In reply to comment #9)
&gt; (To be clear, that was the Chromium bot)

I would definitely want to check Chromium Win as well, and I would feel better if we could say that we hadn&apos;t seen this for a couple weeks. 

That said, I believe that this feature should stay in the tool, period.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430820</commentid>
    <comment_count>11</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:32:02 -0700</bug_when>
    <thetext>&gt; That said, I believe that this feature should stay in the tool, period.

To be clear, you believe the feature should stay in the tool regardless of whether the condition ever occurs?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430823</commentid>
    <comment_count>12</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:36:52 -0700</bug_when>
    <thetext>(In reply to comment #11)
&gt; &gt; That said, I believe that this feature should stay in the tool, period.
&gt; 
&gt; To be clear, you believe the feature should stay in the tool regardless of whether the condition ever occurs?

Not exactly. I believe that the feature exists in order to keep the tool from deadlocking and hanging. The way the tool is currently written, deadlock is otherwise possible. If you can convince me that deadlock is not otherwise possible, I would be more open to removing it.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430832</commentid>
    <comment_count>13</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:40:42 -0700</bug_when>
    <thetext>This hasn&apos;t happened in the 100 most recent builds on Chromium Windows either.

So, we&apos;ve established that if this condition does ever occur, it occurs less than 0.5 percent of the time.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430837</commentid>
    <comment_count>14</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:43:15 -0700</bug_when>
    <thetext>(In reply to comment #13)
&gt; This hasn&apos;t happened in the 100 most recent builds on Chromium Windows either.
&gt; 
&gt; So, we&apos;ve established that if this condition does ever occur, it occurs less than 0.5 percent of the time.

I&apos;m not trying to be a nag here; comment #12 best explains why I think this feature should stay in the tool. 

That said, you are assuming a uniform distribution of events and that past performance predicts future performance.

If someone introduces a patch into Chromium DRT that hangs it, this will occur 100% of the time.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430841</commentid>
    <comment_count>15</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:44:51 -0700</bug_when>
    <thetext>(In reply to comment #14)
&gt; (In reply to comment #13)
&gt; &gt; This hasn&apos;t happened in the 100 most recent builds on Chromium Windows either.
&gt; &gt; 
&gt; &gt; So, we&apos;ve established that if this condition does ever occur, it occurs less than 0.5 percent of the time.
&gt; 
&gt; I&apos;m not trying to be a nag here; comment #12 best explains why I think this feature should stay in the tool. 
&gt; 
&gt; That said, you are assuming a uniform distribution of events and that past performance predicts future performance.
&gt; 
&gt; If someone introduces a patch into Chromium DRT that hangs it, this will occur 100% of the time.

Oh, and in addition, this code protects not only currently executing (hopefully correctly written) code but also code that may be written in the future. E.g., you may find, as you are getting the Apple Win port working or porting this to some new WebKit port, that your DRTs hang on occasion, at which point this code would once again be useful.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430848</commentid>
    <comment_count>16</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:48:02 -0700</bug_when>
    <thetext>If we need to kill DRT, we can do that in the worker process.  If the worker process has a bug that causes it to hang, we can fix that bug.  In neither case do we need the redundant layer of timeout.  It&apos;s not helping anyone.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430853</commentid>
    <comment_count>17</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 16:52:32 -0700</bug_when>
    <thetext>(In reply to comment #16)
&gt; If we need to kill DRT, we can do that in the worker process.  If the worker process has a bug that causes it to hang, we can fix that bug.  In neither case do we need the redundant layer of timeout.  It&apos;s not helping anyone.

No, you can&apos;t kill it in the worker process, because the worker process is likely blocked inside port-specific code in run_test(). 

If the worker process has a bug that causes it to hang, it is true that you can fix that bug, but in the meantime the tool will be hanging and bots and users will be less happy than they would be otherwise.

So, in both cases the redundant layer of timeouts makes the tool more reliable and robust.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430854</commentid>
    <comment_count>18</comment_count>
      <attachid>99391</attachid>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 16:53:10 -0700</bug_when>
    <thetext>Comment on attachment 99391
Patch

I&apos;ve now checked every build from the Chromium Windows (deps) bot that exists.  This condition has never occurred.  Our estimate is now that it occurs less than twice per thousand runs.  This code does not appear to be useful.  If this becomes a problem in the future, we can add the code back.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430858</commentid>
    <comment_count>19</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-06-30 17:00:26 -0700</bug_when>
    <thetext>This probably won&apos;t convince you, but I will also point out that this code would be useful to anyone who was using the --worker-model=threads feature on a box with Python 2.5 (which would happen if we were running on Leopard or Ubuntu Hardy).

However, you have already also suggested that that code should be removed as well :).</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430987</commentid>
    <comment_count>20</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 22:54:25 -0700</bug_when>
    <thetext>&gt; This probably won&apos;t convince you, but I will also point out that this code would be useful to anyone who was using the --worker-model=threads feature on a box with Python 2.5 (which would happen if we were running on Leopard or Ubuntu Hardy).

That code is gone now.  Hopefully we won&apos;t need to run the tests on Hardy anymore since we&apos;ve moved to Lucid.

The anti-wedging code also causes problems when workers legitimately need to block for moderate periods of time.  For example, on Mac, we want to run /usr/bin/sample on tests that time out in order to understand why they time out.  The sample tool effectively blocks for a while in order to capture stack traces from DRT.

By removing the anti-wedging code, we give the worker process more rope, which it can use to hang itself or to do some useful work.  Given that we can&apos;t seem to find an example of a worker wedging in the 500 runs we&apos;ve examined, it seems like the balance might lie with giving the worker some more rope at this time.

These decisions are all very reversible.  If we run into trouble with the test suite timing out and getting killed by buildbot, we can address that problem.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430988</commentid>
    <comment_count>21</comment_count>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:00:39 -0700</bug_when>
    <thetext>(In reply to comment #20)
&gt; The anti-wedging code also causes problems when workers legitimately need to block for moderate periods of time.  For example, on Mac, we want to run /usr/bin/sample on tests that time out in order to understand why they time out.  The sample tool effectively blocks for a while in order to capture stack traces from DRT.

Are there easy ways we should be communicating back to the master that we&apos;re sampling/crash-reporting/whatever?  Is it correct division of duty for the worker to be doing these tasks?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430989</commentid>
    <comment_count>22</comment_count>
      <attachid>99391</attachid>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:02:57 -0700</bug_when>
    <thetext>Comment on attachment 99391
Patch

I think it&apos;s OK to remove this stuff, given that I remember it being added back in the threading days.  I agree that wedges today are bugs in the workers and can be fixed.

It isn&apos;t entirely clear to me if sampling in the worker is the correct decision, but I assume you can enlighten me there.

It&apos;s also not entirely clear to me if NRWT or DRT manages the test hang timeout.  I believe in ORWT both of them have watchdog timers, I&apos;m not sure what the status in NRWT is.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430991</commentid>
    <comment_count>24</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 23:05:18 -0700</bug_when>
    <thetext>&gt; It isn&apos;t entirely clear to me if sampling in the worker is the correct decision, but I assume you can enlighten me there.

There are two reasons to do it in the worker:

1) The worker is the one that knows the process ID of DRT, which is necessary to run sample.
2) The worker is responsible for adding all the output files to the results directory, and the hang report is one such file.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430992</commentid>
    <comment_count>25</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 23:06:36 -0700</bug_when>
    <thetext>&gt; Are there easy ways we should be communicating back to the master that we&apos;re sampling/crash-reporting/whatever?  Is it correct division of duty for the worker to be doing these tasks?

The master hardly reads any information from the workers.  He just stuffs the channel with tasks and then waits for the workers to spin down.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430996</commentid>
    <comment_count>26</comment_count>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:08:59 -0700</bug_when>
    <thetext>(In reply to comment #7)
&gt; (In reply to comment #6)
&gt; &gt; &gt; This is incorrect. Workers can still get wedged, it just happens much less often.
&gt; &gt; 
&gt; &gt; How does this happen?  How often does it happen?
&gt; 
&gt; In the chromium port, in particular, the test timeout is enforced by DRT itself, rather than by the python code. If DRT wedges and doesn&apos;t self-timeout, the worker will wedge also.

This sounds like a bug.  I believe in ORWT both DRT and ORWT have timers.  I guess that&apos;s what this wedge code is about.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430997</commentid>
    <comment_count>27</comment_count>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:10:58 -0700</bug_when>
    <thetext>(In reply to comment #25)
&gt; &gt; Are there easy ways we should be communicating back to the master that we&apos;re sampling/crash-reporting/whatever?  Is it correct division of duty for the worker to be doing these tasks?
&gt; 
&gt; The master hardly reads any information from the workers.  He just stuffs the channel with tasks and then waits for the workers to spin down.

I see.  So he never reads back any state from the workers?

A design whereby the workers were considered &quot;trusted&quot; parts of the architecture and responsible for making sure that they themselves didn&apos;t wedge makes sense to me.  I remember this wedging code being very necessary back when we couldn&apos;t trust that threads wouldn&apos;t wedge.

If we&apos;re removing this, what DRT-wedge-prevention code do we have left in NRWT?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>430998</commentid>
    <comment_count>28</comment_count>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:13:12 -0700</bug_when>
    <thetext>(In reply to comment #24)
&gt; 2) The worker is responsible for adding all the output files to the results directory, and the hang report is one such file.

(I added a FIXME in one of my patches that we should be sampling from the worker instead of the ServerProcess, since it seems silly for the ServerProcess to know anything about the port or the results directory.)  It does make sense that the worker would be the one responsible for stuffing the results directory, though.

I&apos;m not sure that ServerProcess makes sense on its own, given how much knowledge it needs about DRT (in terms of how to deal with stdout and stderr).   Right now the only other client for ServerProcess (besides DRT) is ImageDiff.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431000</commentid>
    <comment_count>29</comment_count>
      <attachid>99391</attachid>
    <who name="Eric Seidel (no email)">eric</who>
    <bug_when>2011-06-30 23:26:29 -0700</bug_when>
    <thetext>Comment on attachment 99391
Patch

I understand your desire to remove this.  I does feel like the wrong layer to put this trust boundary now that we&apos;re not using threaded workers.  Can you please file a follow-up bug about handling the hung DRT case in the workers?  (Assuming we don&apos;t have timeout code in the worker like dirk was alluding to before?)</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431002</commentid>
    <comment_count>30</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 23:28:58 -0700</bug_when>
    <thetext>Done: https://bugs.webkit.org/show_bug.cgi?id=63784</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431004</commentid>
    <comment_count>31</comment_count>
    <who name="Adam Barth">abarth</who>
    <bug_when>2011-06-30 23:34:58 -0700</bug_when>
    <thetext>Committed r90207: &lt;http://trac.webkit.org/changeset/90207&gt;</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431370</commentid>
    <comment_count>32</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-07-01 13:01:38 -0700</bug_when>
    <thetext>(In reply to comment #22)
&gt; (From update of attachment 99391 [details])
&gt; I think it&apos;s OK to remove this stuff, given that I remember it being added back in the threading days.  I agree that wedges today are bugs in the workers and can be fixed.
&gt; 
&gt; It isn&apos;t entirely clear to me if sampling in the worker is the correct decision, but I assume you can enlighten me there.
&gt; 

Partly, sampling in the worker is the right thing to do because you get increased parallelism that way; partly it is fallout from how the code is currently designed.

&gt; It&apos;s also not entirely clear to me if NRWT or DRT manages the test hang timeout.  I believe in ORWT both of them have watchdog timers, I&apos;m not sure what the status in NRWT is.

Chromium&apos;s DRT enforces the test hang timeout directly.  The other DRTs, as far as I know, do not (we don&apos;t even pass the timeout to the other DRTs).

This was before my time, so I don&apos;t know exactly why this decision was made, but it may have been prompted by the fact that non-blocking I/O is annoying to do in Python generally and significantly harder on Windows; it may not have worked at all in Python 2.4 on XP, for example.

The lack of NBIO is a big part of the reason the current apple win port doesn&apos;t work, since the webkit.py/server_process.py implementation relies on it.

The work I did a few months ago convinced me that it was actually doable on Windows, though I never got around to it.  Once someone implements non-blocking I/O, it becomes more feasible to say that the workers can properly enforce the timeout and won&apos;t get wedged.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431376</commentid>
    <comment_count>33</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-07-01 13:04:47 -0700</bug_when>
    <thetext>(In reply to comment #26)
&gt; (In reply to comment #7)
&gt; &gt; (In reply to comment #6)
&gt; &gt; &gt; &gt; This is incorrect. Workers can still get wedged, it just happens much less often.
&gt; &gt; &gt; 
&gt; &gt; &gt; How does this happen?  How often does it happen?
&gt; &gt; 
&gt; &gt; In the chromium port, in particular, the test timeout is enforced by DRT itself, rather than by the python code. If DRT wedges and doesn&apos;t self-timeout, the worker will wedge also.
&gt; 
&gt; This sounds like a bug.  I believe in ORWT both DRT and ORWT have timers.  I guess that&apos;s what this wedge code is about.

There are comments in ORWT that DRT has an internal 30 second timeout (I have no idea if that works or not). WebKitTestRunner can take a timeout command line argument.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>431381</commentid>
    <comment_count>34</comment_count>
    <who name="Dirk Pranke">dpranke</who>
    <bug_when>2011-07-01 13:08:31 -0700</bug_when>
    <thetext>(In reply to comment #27)
&gt; (In reply to comment #25)
&gt; &gt; &gt; Are there easy ways we should be communicating back to the master that we&apos;re sampling/crash-reporting/whatever?  Is it correct division of duty for the worker to be doing these tasks?
&gt; &gt; 
&gt; &gt; The master hardly reads any information from the workers.  He just stuffs the channel with tasks and then waits for the workers to spin down.
&gt; 
&gt; I see.  So he never reads back any state from the workers?
&gt; 

The manager gets the list of test failures and other data about each test run (like the amount of time it takes to run the test). It uses that to (a) tell the user when tests fail, (b) compute the aggregate statistics and results about the test run, and (c) generate the results files.


&gt; A design whereby the workers were considered &quot;trusted&quot; parts of the architecture and responsible for making sure that they themselves didn&apos;t wedge makes sense to me.  I remember this wedging code being very necessary back when we couldn&apos;t trust that threads wouldn&apos;t wedge.
&gt; 
&gt; If we&apos;re removing this, what DRT-wedge-prevention code do we have left in NRWT?

You will be relying solely on the workers&apos; ability to keep themselves from being wedged. I found having two layers of checking much more reliable and it seems like an entirely sensible design to me given that the manager is busy tracking other information about the workers as well (like whether they are done or not).</thetext>
  </long_desc>
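Editor's note: for readers without the patch applied, the manager-side check being removed here amounts to roughly the following. This is a simplified reconstruction, not the patch itself; WEDGE_PADDING, next_timeout, and the wedged flag follow the names visible in attachment 99391, while WorkerState is pared down to just the fields the check touches.

```python
import time

WEDGE_PADDING = 40.0  # grace period beyond the per-test deadline

class WorkerState(object):
    """Pared-down stand-in for the manager's per-worker bookkeeping."""
    def __init__(self, name):
        self.name = name
        self.done = False          # worker reported completion
        self.wedged = False        # manager gave up on the worker
        self.current_test_name = None
        self.next_timeout = None   # wall-clock deadline for the current test

def worker_is_done(worker_state, now=None):
    """Treat a worker as finished if it reported completion, or mark it
    wedged once it has blown well past its current test's deadline."""
    now = time.time() if now is None else now
    if worker_state.done or worker_state.wedged:
        return True
    deadline = worker_state.next_timeout
    if deadline is not None and now > deadline + WEDGE_PADDING:
        worker_state.wedged = True
        return True
    return False
```

The disagreement above is over whether this second, manager-level deadline is useful defense-in-depth or dead weight once the workers reliably enforce their own per-test timeouts.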
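Editor's note: comments 7 and 32 turn on the fact that, without non-blocking I/O, the Python side can only enforce a per-test timeout by pushing the blocking read onto a helper thread, which is the run_in_another_thread approach mentioned in comment 7. A minimal sketch of that pattern follows; the function name and shape are hypothetical, not code from NRWT.

```python
import subprocess
import threading

def run_with_timeout(cmd, timeout_secs):
    """Run a DRT-like child process, killing it from the outside if it
    wedges past the deadline.  Returns (output, timed_out)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    result = {}

    def read_output():
        # The blocking read lives in a helper thread so the caller
        # never hangs indefinitely on a wedged child.
        result["output"] = proc.communicate()[0]

    reader = threading.Thread(target=read_output)
    reader.start()
    reader.join(timeout_secs)
    timed_out = reader.is_alive()
    if timed_out:
        proc.kill()    # unwedge: kill the child from outside
        reader.join()  # the reader unblocks once the pipe closes
    return result.get("output", b""), timed_out
```

With this in the worker, the test run keeps going even when one DRT hangs, which is the property the removed manager-side check was also trying to guarantee.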
      
          <attachment
              isobsolete="0"
              ispatch="1"
              isprivate="0"
          >
            <attachid>99391</attachid>
            <date>2011-06-30 16:14:47 -0700</date>
            <delta_ts>2022-02-27 23:43:23 -0800</delta_ts>
            <desc>Patch</desc>
            <filename>bug-63767-20110630161446.patch</filename>
            <type>text/plain</type>
            <size>8681</size>
            <attacher name="Adam Barth">abarth</attacher>
            
              <data encoding="base64">SW5kZXg6IFRvb2xzL0NoYW5nZUxvZwo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSBUb29scy9DaGFuZ2VMb2cJKHJl
dmlzaW9uIDkwMTc4KQorKysgVG9vbHMvQ2hhbmdlTG9nCSh3b3JraW5nIGNvcHkpCkBAIC0xLDMg
KzEsMjEgQEAKKzIwMTEtMDYtMzAgIEFkYW0gQmFydGggIDxhYmFydGhAd2Via2l0Lm9yZz4KKwor
ICAgICAgICBSZXZpZXdlZCBieSBOT0JPRFkgKE9PUFMhKS4KKworICAgICAgICBSZW1vdmUgdGhl
IGNvbmNlcHQgb2YgImJlaW5nIHdlZGdlZCIgZnJvbSBuZXctcnVuLXdlYmtpdC10ZXN0cworICAg
ICAgICBodHRwczovL2J1Z3Mud2Via2l0Lm9yZy9zaG93X2J1Zy5jZ2k/aWQ9NjM3NjcKKworICAg
ICAgICBXb3JrZXIgcHJvY2Vzc2VzIHNob3VsZG4ndCBldmVyIGJlY29tZSB3ZWRnZWQuICBNeSB1
bmRlcnN0YW5kaW5nIGlzCisgICAgICAgIHRoYXQgdGhpcyBjb2RlIHdhcyBvcmlnaW5hbGx5IG1v
dGl2YXRlZCBieSB0aGUgb2xkIHRocmVhZGluZy1iYXNlZAorICAgICAgICBkZXNpZ24gYnV0IG5v
IGxvbmdlciBzZXJ2ZXJzIGFueSBwdXJwb3NlLgorCisgICAgICAgIE5vdGU6IElmIHdlIGFjdHVh
bGx5IGhhdmUgYSBwcm9ibGVtIHdpdGggdGhlIHRlc3QgaGFybmVzcyBnZXR0aW5nCisgICAgICAg
IHN0dWNrLCBidWlsZGJvdCB3aWxsIGtpbGwgdXMuCisKKyAgICAgICAgKiBTY3JpcHRzL3dlYmtp
dHB5L2xheW91dF90ZXN0cy9sYXlvdXRfcGFja2FnZS9tYW5hZ2VyLnB5OgorICAgICAgICAqIFNj
cmlwdHMvd2Via2l0cHkvbGF5b3V0X3Rlc3RzL2xheW91dF9wYWNrYWdlL21hbmFnZXJfd29ya2Vy
X2Jyb2tlci5weToKKyAgICAgICAgKiBTY3JpcHRzL3dlYmtpdHB5L2xheW91dF90ZXN0cy9sYXlv
dXRfcGFja2FnZS9tYW5hZ2VyX3dvcmtlcl9icm9rZXJfdW5pdHRlc3QucHk6CisKIDIwMTEtMDYt
MjcgIERpZWdvIEdvbnphbGV6ICA8ZGllZ29oY2dAd2Via2l0Lm9yZz4KIAogICAgICAgICBSZXZp
ZXdlZCBieSBBbnRvbmlvIEdvbWVzLgpJbmRleDogVG9vbHMvU2NyaXB0cy93ZWJraXRweS9sYXlv
dXRfdGVzdHMvbGF5b3V0X3BhY2thZ2UvbWFuYWdlci5weQo9PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSBUb29scy9T
Y3JpcHRzL3dlYmtpdHB5L2xheW91dF90ZXN0cy9sYXlvdXRfcGFja2FnZS9tYW5hZ2VyLnB5CShy
ZXZpc2lvbiA5MDE3NikKKysrIFRvb2xzL1NjcmlwdHMvd2Via2l0cHkvbGF5b3V0X3Rlc3RzL2xh
eW91dF9wYWNrYWdlL21hbmFnZXIucHkJKHdvcmtpbmcgY29weSkKQEAgLTY2OCwxOSArNjY4LDEy
IEBAIGNsYXNzIE1hbmFnZXI6CiAKICAgICAgICAgdHJ5OgogICAgICAgICAgICAgd2hpbGUgbm90
IHNlbGYuaXNfZG9uZSgpOgotICAgICAgICAgICAgICAgICMgV2UgbG9vcCB3aXRoIGEgdGltZW91
dCBpbiBvcmRlciB0byBiZSBhYmxlIHRvIGRldGVjdCB3ZWRnZWQgdGhyZWFkcy4KKyAgICAgICAg
ICAgICAgICAjIEZJWE1FOiBEbyB3ZSBuZWVkIHRvIHJ1biBpbiBhIGxvb3AgYW55bW9yZT8KICAg
ICAgICAgICAgICAgICBtYW5hZ2VyX2Nvbm5lY3Rpb24ucnVuX21lc3NhZ2VfbG9vcChkZWxheV9z
ZWNzPTEuMCkKIAotICAgICAgICAgICAgaWYgYW55KHdvcmtlcl9zdGF0ZS53ZWRnZWQgZm9yIHdv
cmtlcl9zdGF0ZSBpbiBzZWxmLl93b3JrZXJfc3RhdGVzLnZhbHVlcygpKToKLSAgICAgICAgICAg
ICAgICBfbG9nLmVycm9yKCcnKQotICAgICAgICAgICAgICAgIF9sb2cuZXJyb3IoJ1JlbWFpbmlu
ZyB3b3JrZXJzIGFyZSB3ZWRnZWQsIGJhaWxpbmcgb3V0LicpCi0gICAgICAgICAgICAgICAgX2xv
Zy5lcnJvcignJykKLSAgICAgICAgICAgIGVsc2U6Ci0gICAgICAgICAgICAgICAgX2xvZy5kZWJ1
ZygnTm8gd2VkZ2VkIHRocmVhZHMnKQotCiAgICAgICAgICAgICAjIE1ha2Ugc3VyZSBhbGwgb2Yg
dGhlIHdvcmtlcnMgaGF2ZSBzaHV0IGRvd24gKGlmIHBvc3NpYmxlKS4KICAgICAgICAgICAgIGZv
ciB3b3JrZXJfc3RhdGUgaW4gc2VsZi5fd29ya2VyX3N0YXRlcy52YWx1ZXMoKToKLSAgICAgICAg
ICAgICAgICBpZiBub3Qgd29ya2VyX3N0YXRlLndlZGdlZCBhbmQgd29ya2VyX3N0YXRlLndvcmtl
cl9jb25uZWN0aW9uLmlzX2FsaXZlKCk6CisgICAgICAgICAgICAgICAgaWYgd29ya2VyX3N0YXRl
Lndvcmtlcl9jb25uZWN0aW9uLmlzX2FsaXZlKCk6CiAgICAgICAgICAgICAgICAgICAgIF9sb2cu
ZGVidWcoJ1dhaXRpbmcgZm9yIHdvcmtlciAlZCB0byBleGl0JyAlIHdvcmtlcl9zdGF0ZS5udW1i
ZXIpCiAgICAgICAgICAgICAgICAgICAgIHdvcmtlcl9zdGF0ZS53b3JrZXJfY29ubmVjdGlvbi5q
b2luKDUuMCkKICAgICAgICAgICAgICAgICAgICAgaWYgd29ya2VyX3N0YXRlLndvcmtlcl9jb25u
ZWN0aW9uLmlzX2FsaXZlKCk6CkBAIC0xMzEzLDIwICsxMzA2LDkgQEAgY2xhc3MgTWFuYWdlcjoK
ICAgICAgICAgd29ya2VyX3N0YXRlcyA9IHNlbGYuX3dvcmtlcl9zdGF0ZXMudmFsdWVzKCkKICAg
ICAgICAgcmV0dXJuIHdvcmtlcl9zdGF0ZXMgYW5kIGFsbChzZWxmLl93b3JrZXJfaXNfZG9uZSh3
b3JrZXJfc3RhdGUpIGZvciB3b3JrZXJfc3RhdGUgaW4gd29ya2VyX3N0YXRlcykKIAorICAgICMg
RklYTUU6IElubGluZSB0aGlzIGZ1bmN0aW9uLgogICAgIGRlZiBfd29ya2VyX2lzX2RvbmUoc2Vs
Ziwgd29ya2VyX3N0YXRlKToKLSAgICAgICAgdCA9IHRpbWUudGltZSgpCi0gICAgICAgIGlmIHdv
cmtlcl9zdGF0ZS5kb25lIG9yIHdvcmtlcl9zdGF0ZS53ZWRnZWQ6Ci0gICAgICAgICAgICByZXR1
cm4gVHJ1ZQotCi0gICAgICAgIG5leHRfdGltZW91dCA9IHdvcmtlcl9zdGF0ZS5uZXh0X3RpbWVv
dXQKLSAgICAgICAgV0VER0VfUEFERElORyA9IDQwLjAKLSAgICAgICAgaWYgbmV4dF90aW1lb3V0
IGFuZCB0ID4gbmV4dF90aW1lb3V0ICsgV0VER0VfUEFERElORzoKLSAgICAgICAgICAgIF9sb2cu
ZXJyb3IoJycpCi0gICAgICAgICAgICB3b3JrZXJfc3RhdGUud29ya2VyX2Nvbm5lY3Rpb24ubG9n
X3dlZGdlZF93b3JrZXIod29ya2VyX3N0YXRlLmN1cnJlbnRfdGVzdF9uYW1lKQotICAgICAgICAg
ICAgX2xvZy5lcnJvcignJykKLSAgICAgICAgICAgIHdvcmtlcl9zdGF0ZS53ZWRnZWQgPSBUcnVl
Ci0gICAgICAgICAgICByZXR1cm4gVHJ1ZQotICAgICAgICByZXR1cm4gRmFsc2UKKyAgICAgICAg
cmV0dXJuIHdvcmtlcl9zdGF0ZS5kb25lCiAKICAgICBkZWYgY2FuY2VsX3dvcmtlcnMoc2VsZik6
CiAgICAgICAgIGZvciB3b3JrZXJfc3RhdGUgaW4gc2VsZi5fd29ya2VyX3N0YXRlcy52YWx1ZXMo
KToKQEAgLTEzNjEsMTAgKzEzNDMsNiBAQCBjbGFzcyBNYW5hZ2VyOgogICAgICAgICB3b3JrZXJf
c3RhdGUuc3RhdHNbJ3RvdGFsX3RpbWUnXSArPSBlbGFwc2VkX3RpbWUKICAgICAgICAgd29ya2Vy
X3N0YXRlLnN0YXRzWydudW1fdGVzdHMnXSArPSAxCiAKLSAgICAgICAgaWYgd29ya2VyX3N0YXRl
LndlZGdlZDoKLSAgICAgICAgICAgICMgVGhpcyBzaG91bGRuJ3QgaGFwcGVuIGlmIHdlIGhhdmUg
b3VyIHRpbWVvdXRzIHR1bmVkIHByb3Blcmx5LgotICAgICAgICAgICAgX2xvZy5lcnJvcigiJXMg
dW53ZWRnZWQiLCBzb3VyY2UpCi0KICAgICAgICAgc2VsZi5fYWxsX3Jlc3VsdHMuYXBwZW5kKHJl
c3VsdCkKICAgICAgICAgc2VsZi5fdXBkYXRlX3N1bW1hcnlfd2l0aF9yZXN1bHQoc2VsZi5fY3Vy
cmVudF9yZXN1bHRfc3VtbWFyeSwgcmVzdWx0KQogCkBAIC0xNDI5LDcgKzE0MDcsNiBAQCBjbGFz
cyBfV29ya2VyU3RhdGUob2JqZWN0KToKICAgICAgICAgc2VsZi5kb25lID0gRmFsc2UKICAgICAg
ICAgc2VsZi5jdXJyZW50X3Rlc3RfbmFtZSA9IE5vbmUKICAgICAgICAgc2VsZi5uZXh0X3RpbWVv
dXQgPSBOb25lCi0gICAgICAgIHNlbGYud2VkZ2VkID0gRmFsc2UKICAgICAgICAgc2VsZi5zdGF0
cyA9IHt9CiAgICAgICAgIHNlbGYuc3RhdHNbJ25hbWUnXSA9IHdvcmtlcl9jb25uZWN0aW9uLm5h
bWUKICAgICAgICAgc2VsZi5zdGF0c1snbnVtX3Rlc3RzJ10gPSAwCkluZGV4OiBUb29scy9TY3Jp
cHRzL3dlYmtpdHB5L2xheW91dF90ZXN0cy9sYXlvdXRfcGFja2FnZS9tYW5hZ2VyX3dvcmtlcl9i
cm9rZXIucHkKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PQotLS0gVG9vbHMvU2NyaXB0cy93ZWJraXRweS9sYXlvdXRfdGVz
dHMvbGF5b3V0X3BhY2thZ2UvbWFuYWdlcl93b3JrZXJfYnJva2VyLnB5CShyZXZpc2lvbiA5MDE0
MSkKKysrIFRvb2xzL1NjcmlwdHMvd2Via2l0cHkvbGF5b3V0X3Rlc3RzL2xheW91dF9wYWNrYWdl
L21hbmFnZXJfd29ya2VyX2Jyb2tlci5weQkod29ya2luZyBjb3B5KQpAQCAtMjI2LDkgKzIyNiw2
IEBAIGNsYXNzIF9Xb3JrZXJDb25uZWN0aW9uKG1lc3NhZ2VfYnJva2VyLkIKICAgICBkZWYgam9p
bihzZWxmLCB0aW1lb3V0KToKICAgICAgICAgcmFpc2UgTm90SW1wbGVtZW50ZWRFcnJvcgogCi0g
ICAgZGVmIGxvZ193ZWRnZWRfd29ya2VyKHNlbGYsIHRlc3RfbmFtZSk6Ci0gICAgICAgIHJhaXNl
IE5vdEltcGxlbWVudGVkRXJyb3IKLQogICAgIGRlZiB5aWVsZF90b19icm9rZXIoc2VsZik6CiAg
ICAgICAgIHBhc3MKIApAQCAtMjQ5LDkgKzI0Niw2IEBAIGNsYXNzIF9JbmxpbmVXb3JrZXJDb25u
ZWN0aW9uKF9Xb3JrZXJDb24KICAgICBkZWYgam9pbihzZWxmLCB0aW1lb3V0KToKICAgICAgICAg
YXNzZXJ0IG5vdCBzZWxmLl9hbGl2ZQogCi0gICAgZGVmIGxvZ193ZWRnZWRfd29ya2VyKHNlbGYs
IHRlc3RfbmFtZSk6Ci0gICAgICAgIGFzc2VydCBGYWxzZSwgIl9JbmxpbmVXb3JrZXJDb25uZWN0
aW9uLmxvZ193ZWRnZWRfd29ya2VyKCkgY2FsbGVkIgotCiAgICAgZGVmIHJ1bihzZWxmKToKICAg
ICAgICAgc2VsZi5fYWxpdmUgPSBUcnVlCiAgICAgICAgIHNlbGYuX2NsaWVudC5ydW4oc2VsZi5f
cG9ydCkKQEAgLTI3MSw5ICsyNjUsNiBAQCBjbGFzcyBfVGhyZWFkKHRocmVhZGluZy5UaHJlYWQp
OgogICAgIGRlZiBjYW5jZWwoc2VsZik6CiAgICAgICAgIHJldHVybiBzZWxmLl9jbGllbnQuY2Fu
Y2VsKCkKIAotICAgIGRlZiBsb2dfd2VkZ2VkX3dvcmtlcihzZWxmLCB0ZXN0X25hbWUpOgotICAg
ICAgICBzdGFja191dGlscy5sb2dfdGhyZWFkX3N0YXRlKF9sb2cuZXJyb3IsIHNlbGYuX2NsaWVu
dC5uYW1lKCksIHNlbGYuaWRlbnQsICIgaXMgd2VkZ2VkIG9uIHRlc3QgJXMiICUgdGVzdF9uYW1l
KQotCiAgICAgZGVmIHJ1bihzZWxmKToKICAgICAgICAgIyBGSVhNRTogV2UgY2FuIHJlbW92ZSB0
aGlzIG9uY2UgZXZlcnlvbmUgaXMgb24gMi42LgogICAgICAgICBpZiBub3QgaGFzYXR0cihzZWxm
LCAnaWRlbnQnKToKQEAgLTI5Niw5ICsyODcsNiBAQCBjbGFzcyBfVGhyZWFkZWRXb3JrZXJDb25u
ZWN0aW9uKF9Xb3JrZXJDCiAgICAgZGVmIGpvaW4oc2VsZiwgdGltZW91dCk6CiAgICAgICAgIHJl
dHVybiBzZWxmLl90aHJlYWQuam9pbih0aW1lb3V0KQogCi0gICAgZGVmIGxvZ193ZWRnZWRfd29y
a2VyKHNlbGYsIHRlc3RfbmFtZSk6Ci0gICAgICAgIHJldHVybiBzZWxmLl90aHJlYWQubG9nX3dl
ZGdlZF93b3JrZXIodGVzdF9uYW1lKQotCiAgICAgZGVmIHN0YXJ0KHNlbGYpOgogICAgICAgICBz
ZWxmLl90aHJlYWQuc3RhcnQoKQogCkBAIC0zMTMsOSArMzAxLDYgQEAgaWYgbXVsdGlwcm9jZXNz
aW5nOgogICAgICAgICAgICAgc2VsZi5fb3B0aW9ucyA9IG9wdGlvbnMKICAgICAgICAgICAgIHNl
bGYuX2NsaWVudCA9IGNsaWVudAogCi0gICAgICAgIGRlZiBsb2dfd2VkZ2VkX3dvcmtlcihzZWxm
LCB0ZXN0X25hbWUpOgotICAgICAgICAgICAgX2xvZy5lcnJvcigiJXMgKHBpZCAlZCkgaXMgd2Vk
Z2VkIG9uIHRlc3QgJXMiICUgKHNlbGYubmFtZSwgc2VsZi5waWQsIHRlc3RfbmFtZSkpCi0KICAg
ICAgICAgZGVmIHJ1bihzZWxmKToKICAgICAgICAgICAgIG9wdGlvbnMgPSBzZWxmLl9vcHRpb25z
CiAgICAgICAgICAgICBwb3J0X29iaiA9IHBvcnQuZ2V0KHNlbGYuX3BsYXRmb3JtX25hbWUsIG9w
dGlvbnMpCkBAIC0zNDgsOCArMzMzLDUgQEAgY2xhc3MgX011bHRpUHJvY2Vzc1dvcmtlckNvbm5l
Y3Rpb24oX1dvcgogICAgIGRlZiBqb2luKHNlbGYsIHRpbWVvdXQpOgogICAgICAgICByZXR1cm4g
c2VsZi5fcHJvYy5qb2luKHRpbWVvdXQpCiAKLSAgICBkZWYgbG9nX3dlZGdlZF93b3JrZXIoc2Vs
ZiwgdGVzdF9uYW1lKToKLSAgICAgICAgcmV0dXJuIHNlbGYuX3Byb2MubG9nX3dlZGdlZF93b3Jr
ZXIodGVzdF9uYW1lKQotCiAgICAgZGVmIHN0YXJ0KHNlbGYpOgogICAgICAgICBzZWxmLl9wcm9j
LnN0YXJ0KCkKSW5kZXg6IFRvb2xzL1NjcmlwdHMvd2Via2l0cHkvbGF5b3V0X3Rlc3RzL2xheW91
dF9wYWNrYWdlL21hbmFnZXJfd29ya2VyX2Jyb2tlcl91bml0dGVzdC5weQo9PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0t
LSBUb29scy9TY3JpcHRzL3dlYmtpdHB5L2xheW91dF90ZXN0cy9sYXlvdXRfcGFja2FnZS9tYW5h
Z2VyX3dvcmtlcl9icm9rZXJfdW5pdHRlc3QucHkJKHJldmlzaW9uIDkwMTQxKQorKysgVG9vbHMv
U2NyaXB0cy93ZWJraXRweS9sYXlvdXRfdGVzdHMvbGF5b3V0X3BhY2thZ2UvbWFuYWdlcl93b3Jr
ZXJfYnJva2VyX3VuaXR0ZXN0LnB5CSh3b3JraW5nIGNvcHkpCkBAIC0xODksMjUgKzE4OSw2IEBA
IGNsYXNzIF9UZXN0c01peGluKG9iamVjdCk6CiAgICAgICAgIHNlbGYuYXNzZXJ0RXF1YWwoc2Vs
Zi5fYW5faW50LCAyKQogICAgICAgICBzZWxmLmFzc2VydEVxdWFsKHNlbGYuX2Ffc3RyLCAnaGks
IGV2ZXJ5Ym9keScpCiAKLSAgICBkZWYgdGVzdF9sb2dfd2VkZ2VkX3dvcmtlcihzZWxmKToKLSAg
ICAgICAgc3RhcnRpbmdfcXVldWUgPSBzZWxmLnF1ZXVlKCkKLSAgICAgICAgc3RvcHBpbmdfcXVl
dWUgPSBzZWxmLnF1ZXVlKCkKLSAgICAgICAgc2VsZi5tYWtlX2Jyb2tlcihzdGFydGluZ19xdWV1
ZSwgc3RvcHBpbmdfcXVldWUpCi0gICAgICAgIG9jID0gb3V0cHV0Y2FwdHVyZS5PdXRwdXRDYXB0
dXJlKCkKLSAgICAgICAgb2MuY2FwdHVyZV9vdXRwdXQoKQotICAgICAgICB0cnk6Ci0gICAgICAg
ICAgICB3b3JrZXIgPSBzZWxmLl9icm9rZXIuc3RhcnRfd29ya2VyKDApCi0gICAgICAgICAgICBz
dGFydGluZ19xdWV1ZS5nZXQoKQotICAgICAgICAgICAgd29ya2VyLmxvZ193ZWRnZWRfd29ya2Vy
KCd0ZXN0X25hbWUnKQotICAgICAgICAgICAgc3RvcHBpbmdfcXVldWUucHV0KCcnKQotICAgICAg
ICAgICAgc2VsZi5fYnJva2VyLnBvc3RfbWVzc2FnZSgnc3RvcCcpCi0gICAgICAgICAgICBzZWxm
Ll9icm9rZXIucnVuX21lc3NhZ2VfbG9vcCgpCi0gICAgICAgICAgICB3b3JrZXIuam9pbigwLjUp
Ci0gICAgICAgICAgICBzZWxmLmFzc2VydEZhbHNlKHdvcmtlci5pc19hbGl2ZSgpKQotICAgICAg
ICAgICAgc2VsZi5hc3NlcnRUcnVlKHNlbGYuaXNfZG9uZSgpKQotICAgICAgICBmaW5hbGx5Ogot
ICAgICAgICAgICAgb2MucmVzdG9yZV9vdXRwdXQoKQotCiAgICAgZGVmIHRlc3RfdW5rbm93bl9t
ZXNzYWdlKHNlbGYpOgogICAgICAgICBzZWxmLm1ha2VfYnJva2VyKCkKICAgICAgICAgd29ya2Vy
ID0gc2VsZi5fYnJva2VyLnN0YXJ0X3dvcmtlcigwKQpAQCAtMjIyLDE3ICsyMDMsNiBAQCBjbGFz
cyBfVGVzdHNNaXhpbihvYmplY3QpOgogICAgICAgICAgICAgIlRlc3RXb3JrZXIvMDogcmVjZWl2
ZWQgbWVzc2FnZSAndW5rbm93bicgaXQgY291bGRuJ3QgaGFuZGxlIikKIAogCi1jbGFzcyBJbmxp
bmVCcm9rZXJUZXN0cyhfVGVzdHNNaXhpbiwgdW5pdHRlc3QuVGVzdENhc2UpOgotICAgIGRlZiBz
ZXRVcChzZWxmKToKLSAgICAgICAgX1Rlc3RzTWl4aW4uc2V0VXAoc2VsZikKLSAgICAgICAgc2Vs
Zi5fd29ya2VyX21vZGVsID0gJ2lubGluZScKLQotICAgIGRlZiB0ZXN0X2xvZ193ZWRnZWRfd29y
a2VyKHNlbGYpOgotICAgICAgICBzZWxmLm1ha2VfYnJva2VyKCkKLSAgICAgICAgd29ya2VyID0g
c2VsZi5fYnJva2VyLnN0YXJ0X3dvcmtlcigwKQotICAgICAgICBzZWxmLmFzc2VydFJhaXNlcyhB
c3NlcnRpb25FcnJvciwgd29ya2VyLmxvZ193ZWRnZWRfd29ya2VyLCBOb25lKQotCi0KICMgRklY
TUU6IGh0dHBzOi8vYnVncy53ZWJraXQub3JnL3Nob3dfYnVnLmNnaT9pZD01NDUyMC4KIGlmIG11
bHRpcHJvY2Vzc2luZyBhbmQgc3lzLnBsYXRmb3JtIG5vdCBpbiAoJ2N5Z3dpbicsICd3aW4zMicp
OgogCkBAIC0yODIsNyArMjUyLDYgQEAgY2xhc3MgSW50ZXJmYWNlVGVzdCh1bml0dGVzdC5UZXN0
Q2FzZSk6CiAgICAgICAgIHNlbGYuYXNzZXJ0UmFpc2VzKE5vdEltcGxlbWVudGVkRXJyb3IsIG9i
ai5jYW5jZWwpCiAgICAgICAgIHNlbGYuYXNzZXJ0UmFpc2VzKE5vdEltcGxlbWVudGVkRXJyb3Is
IG9iai5pc19hbGl2ZSkKICAgICAgICAgc2VsZi5hc3NlcnRSYWlzZXMoTm90SW1wbGVtZW50ZWRF
cnJvciwgb2JqLmpvaW4sIE5vbmUpCi0gICAgICAgIHNlbGYuYXNzZXJ0UmFpc2VzKE5vdEltcGxl
bWVudGVkRXJyb3IsIG9iai5sb2dfd2VkZ2VkX3dvcmtlciwgTm9uZSkKIAogCiBpZiBfX25hbWVf
XyA9PSAnX19tYWluX18nOgo=
</data>
<flag name="review"
          id="93855"
          type_id="1"
          status="+"
          setter="eric"
    />
          </attachment>
      

    </bug>

</bugzilla>