Linux-2.4.14-pre8..

Post by Jogi » Tue, 06 Nov 2001 03:40:13




> Ok, this is hopefully the last 2.4.14 pre-kernel, and per popular demand I
> hope to avoid any major changes between "last pre" and final. So give it a
> whirl, and don't whine if the final doesn't act in a way you like it to.

> Special thanks to Andrea - we spent too much time tracking down a subtle
> sigsegv problem, but we got it in the end.

> Also, I was able to reproduce the total lack of interactivity that the
> google people complained about, and while I didn't run the google tests
> themselves, at least the load I had is fixed.

> But most of the changes are actually trying to catch up with some of the
> emails that I ignored while working on the VM issues. I hope the VM is
> good to go, along with a real 2.4.14.

The results of my kernel compile tests for 2.4.14-pre8 follow (earlier
kernels included for comparison; wall-clock times, m:ss, at make -j25 /
-j50 / -j75 / -j100):

                    j25       j50       j75      j100                                    

2.4.13-pre5aa1:   5:02.63   5:09.18   5:26.27   5:34.36                                  
2.4.13-pre5aa1:   4:58.80   5:12.30   5:26.23   5:32.14                                  
2.4.13-pre5aa1:   4:57.66   5:11.29   5:45.90   6:03.53                                  
2.4.13-pre5aa1:   4:58.39   5:13.10   5:29.32   5:44.49                                  
2.4.13-pre5aa1:   4:57.93   5:09.76   5:24.76   5:26.79                                  

2.4.14-pre6:      4:58.88   5:16.68   5:45.93   7:16.56                                  
2.4.14-pre6:      4:55.72   5:34.65   5:57.94   6:50.58                                  
2.4.14-pre6:      4:59.46   5:16.88   6:25.83   6:51.43                                  
2.4.14-pre6:      4:56.38   5:18.88   6:15.97   6:31.72                                  
2.4.14-pre6:      4:55.79   5:17.47   6:00.23   6:44.85                                  

2.4.14-pre7:      4:56.39   5:22.84   6:09.05   9:56.59                                  
2.4.14-pre7:      4:56.55   5:25.15   7:01.37   7:03.74                                  
2.4.14-pre7:      4:59.44   5:15.10   6:06.78   12:51.39*                                
2.4.14-pre7:      4:58.07   5:30.55   6:15.37      *                                      
2.4.14-pre7:      4:58.17   5:26.80   6:41.44      *

2.4.14-pre8:      4:57.14   5:10.72   5:54.42   6:37.39
2.4.14-pre8:      4:59.57   5:11.63   6:34.97   11:23.77
2.4.14-pre8:      4:58.18   5:16.67   6:07.88   6:32.38
2.4.14-pre8:      4:56.23   5:16.57   6:15.01   7:02.45
2.4.14-pre8:      4:58.53   5:19.98   5:39.09   12:08.69

Is there anything else I can measure during the kernel compiles?
Are the numbers for >= -pre6 slower because of measures taken to
increase the "interactivity" / responsiveness of the kernel?

The part that looks most suspicious to me is that the results
for make -j100 vary so much ...

Regards,

   Jogi

Here is the additional information from time -v for the make -j100 runs:

User time (seconds): 261.63
System time (seconds): 25.89
Percent of CPU this job got: 72%
Elapsed (wall clock) time (h:mm:ss or m:ss): 6:37.39
Major (requiring I/O) page faults: 937515
Minor (reclaiming a frame) page faults: 1059195

User time (seconds): 264.69
System time (seconds): 28.47
Percent of CPU this job got: 42%
Elapsed (wall clock) time (h:mm:ss or m:ss): 11:23.77
Major (requiring I/O) page faults: 999211
Minor (reclaiming a frame) page faults: 1101511

User time (seconds): 262.22
System time (seconds): 25.11
Percent of CPU this job got: 73%
Elapsed (wall clock) time (h:mm:ss or m:ss): 6:32.38
Major (requiring I/O) page faults: 935552
Minor (reclaiming a frame) page faults: 1064976

User time (seconds): 262.22
System time (seconds): 26.77
Percent of CPU this job got: 68%
Elapsed (wall clock) time (h:mm:ss or m:ss): 7:02.45
Major (requiring I/O) page faults: 960273
Minor (reclaiming a frame) page faults: 1075637

User time (seconds): 263.20
System time (seconds): 35.87
Percent of CPU this job got: 41%
Elapsed (wall clock) time (h:mm:ss or m:ss): 12:08.69
Major (requiring I/O) page faults: 953770
Minor (reclaiming a frame) page faults: 1105582
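(The "Percent of CPU" figure is just (user + system) / elapsed: for the
slowest run, (263.20 + 35.87) / 728.69 s is about 41%. User time is nearly
identical across all five runs, so the slow runs are not doing more work --
the extra wall-clock time is spent blocked, presumably on the roughly one
million major (I/O) page faults each run takes.)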

--

Well, yeah ... I suppose there's no point in getting greedy, is there?

    << Calvin & Hobbes >>

Linux-2.4.14-pre8..

Post by Linus Torvalds » Tue, 06 Nov 2001 04:00:18




> Is there anything else I can measure during the kernel compiles?
> Are the numbers for >= -pre6 slower because of measures taken to
> increase the "interactivity" / responsivness of the kernel?

No, it's something else, possibly some of the sharing braindamage.

> The part that looks most suspicious to me is that the results
> for make -j100 vary so much ...

Indeed. I don't like that part at all. That implies that some part of the
code is unstable. One thing that Andrea points out is that the current VM
scanning is rather unfair to active pages - if we get lots of active pages
(for whatever reason), that will defeat some of the page-out aging code.

Andrea also suspects that when we de-activate a page in
refill_inactive, we should activate it again if somebody touches it, and
not make it go through the whole "activate on second reference" rigmarole
again. What does this patch do to the pre8 behaviour?

(The first chunk just says that we _can_ unmap active pages: it's up to
refill_inactive to perhaps de-activate them and free them on demand. The
second chunk says that when refill_inactive() moves a page to the inactive
list, it's already been "touched once", so another access will activate it
again).

                Linus

----
diff -u --recursive pre8/linux/mm/vmscan.c linux/mm/vmscan.c
--- pre8/linux/mm/vmscan.c      Sun Nov  4 09:41:04 2001
+++ linux/mm/vmscan.c   Sun Nov  4 10:41:59 2001

                return 0;
        }

+#if 0
        /* Don't bother unmapping pages that are active */
        if (PageActive(page))
                return 0;
+#endif

        /* Don't bother replenishing zones not under pressure.. */
        if (!memclass(page->zone, classzone))

                del_page_from_active_list(page);
                add_page_to_inactive_list(page);
+               SetPageReferenced(page);
        }
        spin_unlock(&pagemap_lru_lock);
 }
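
(For context, the "activate on second reference" path that the second
chunk short-circuits is the page-aging logic in mark_page_accessed().
A rough sketch from memory of 2.4's mm/filemap.c -- treat the details
as approximate, not verbatim source:)

        /*
         * inactive, unreferenced  ->  inactive, referenced
         * inactive, referenced    ->  active,   unreferenced
         * active,   unreferenced  ->  active,   referenced
         */
        void mark_page_accessed(struct page * page)
        {
                if (!PageActive(page) && PageReferenced(page)) {
                        /* second touch: promote to the active list */
                        activate_page(page);
                        ClearPageReferenced(page);
                        return;
                }
                /* first touch: only mark the page referenced */
                SetPageReferenced(page);
        }

With refill_inactive() now setting PageReferenced on deactivation, a
freshly deactivated page only needs one further access to be promoted
back, instead of two.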


Linux-2.4.14-pre8..

Post by Jogi » Tue, 06 Nov 2001 06:10:15


Hello,

With the complete patch (see below) the kernel killed processes while
running make -j100. So I tried only the second chunk of the patch (the
SetPageReferenced part), and here are the results (this time only the
make -j100 numbers):

2.4.14-pre8vmscan2:   6:12.06
2.4.14-pre8vmscan2:   6:41.43
2.4.14-pre8vmscan2:   6:53.22
2.4.14-pre8vmscan2:   7:12.03
2.4.14-pre8vmscan2:   5:49.82

I will run the whole test tonight and I will keep you posted. If you
want me to try a different patch, just let me know.

> (The first chunk just says that we _can_ unmap active pages: it's up to
> refill_inactive to perhaps de-activate them and free them on demand. The
> second chunk says that when refill_inactive() moves a page to the inactive
> list, it's already been "touched once", so another access will activate it
> again).

>            Linus

> ----
> diff -u --recursive pre8/linux/mm/vmscan.c linux/mm/vmscan.c
> --- pre8/linux/mm/vmscan.c Sun Nov  4 09:41:04 2001
> +++ linux/mm/vmscan.c      Sun Nov  4 10:41:59 2001

>            return 0;
>    }

> +#if 0
>    /* Don't bother unmapping pages that are active */
>    if (PageActive(page))
>            return 0;
> +#endif

>    /* Don't bother replenishing zones not under pressure.. */
>    if (!memclass(page->zone, classzone))

>            del_page_from_active_list(page);
>            add_page_to_inactive_list(page);
> +          SetPageReferenced(page);
>    }
>    spin_unlock(&pagemap_lru_lock);
>  }

--

Well, yeah ... I suppose there's no point in getting greedy, is there?

    << Calvin & Hobbes >>

Linux-2.4.14-pre8..

Post by Linus Torvalds » Tue, 06 Nov 2001 10:00:12




> with the complete patch (s.b.) the kernel did kill processes while running
> make -j100. So I tried only the second part of the patch (the SetPage-part)
> and here are the results (this time only the make -j100 part:

> 2.4.14-pre8vmscan2:   6:12.06
> 2.4.14-pre8vmscan2:   6:41.43
> 2.4.14-pre8vmscan2:   6:53.22
> 2.4.14-pre8vmscan2:   7:12.03
> 2.4.14-pre8vmscan2:   5:49.82

Good. So at least that one seems to explain (and fix) the "non-repeatable
performance".  That's the one that worried me the most in your load. It
still has some variation, but it's a _lot_ better. Thanks,

                Linus


loopback device problem and unrequested modules trying to load: linux-2.4.14

I experienced dependency problems with the 2.4.14 kernel when I tried to
compile in the loopback block device, as others have reported. In addition
to this, modprobe was looking for modules which I did not have installed:
sound-service0-0 and sound-slot (I may not have the module names verbatim).
I have sound support compiled into the kernel; there should be no sound
modules loading, and I didn't have any loading before compiling this
kernel. When I rebooted with my old kernel, those modules were no longer
trying to load. I should clarify that the modules weren't actually
loading; modprobe was trying to find them and couldn't. I double-checked
my config to make sure I hadn't requested any modules for sound by
accident.
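
For anyone else hitting this: the sound-slot-*/sound-service-* names are
alias requests generated by the kernel sound layer, not real module names.
Assuming standard modutils, a common workaround is to stub the aliases out
in /etc/modules.conf so modprobe stops searching for them (the exact alias
names below are guesses -- check what modprobe actually complains about):

        # /etc/modules.conf -- sound is compiled in, so map the
        # kernel's sound alias requests to nothing
        alias sound-slot-0 off
        alias sound-service-0-0 off

On distributions that auto-generate modules.conf (e.g. with
update-modules), put the lines in the appropriate source file instead of
editing modules.conf directly.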
