I will dump all preconceived notions and attack my problem again.
Tonight was another bad night, as about 1/2 of the webpages I tried to
read ended up in "The remote host failed to send the reply within the
timeout period". Indeed, it seems most of my web reading is thru the
Google cache copies, as connecting to the real page gives, e.g.:
Looking up oriented.org
Making HTTP connection to oriented.org
Sending HTTP request.
HTTP request sent; waiting for response.
Alert!: Unexpected network read error; connection aborted.
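The trace above shows the lookup and the TCP connect succeeding, with the failure only coming while waiting for the reply. A minimal sketch (host and port here are placeholders, not from my setup) to time each phase separately and see which one dies:

```python
# Hypothetical probe: time DNS lookup, TCP connect, and the first read
# separately, so a failure can be pinned to one phase.
import socket
import time

def probe(host, port=80, timeout=5.0):
    phases = {}
    t = time.time()
    # DNS lookup (IPv4 only, to match a plain AF_INET socket)
    addr = socket.getaddrinfo(host, port, socket.AF_INET,
                              socket.SOCK_STREAM)[0][4]
    phases['dns'] = time.time() - t
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    t = time.time()
    s.connect(addr)                               # TCP handshake
    phases['connect'] = time.time() - t
    s.sendall(b"GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host.encode())
    t = time.time()
    reply = s.recv(4096)                          # first bytes of the reply
    phases['read'] = time.time() - t
    s.close()
    return phases, reply
```

If `connect` returns but `recv` raises, the problem is after the handshake, i.e. the reply itself is getting lost.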
Yes, I've used tcpdump, etc... anyway, over these months the feeling I
get is that there are various sites that shine thru, like Google, no
matter what mists enshroud them. Then there are other, weaker
sites, e.g., my future website www.jidanni.org or my domain name
manager, http://manage.opensrs.net/, whose pages I can hardly ever
get to see, even though http://opensrs.net/ always comes thru.
Am I the first person on the wrong side of the Pacific Ocean to want
to see all those US sites?
Anyways, every 15 minutes or so the clouds part for a while and I can
get to more sites, but worryingly not, say, my own website, where I
have only a tiny test page, nothing slow.
As you can see in
I thought I had one of the problems like the others, but now I think
I'm just hurting folks by making them wonder if the problem is their
system --- but how could it be, if about 1/2 the net is down for me?
Today I couldn't read pages on
were no problem browsing.
Are the bad sites just too far away or something? What do the bad
sites have in common that makes them turn into an error message within
a second or two of my clicking on them? Yes, I've tinkered with
/etc/ppp/* and /proc/***/*tcp*
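Before tinkering with those /proc knobs any further, it helps to record their current values so they can be put back. A small sketch (Linux-only; the path is the usual location of the TCP tunables, assumed here) that dumps them:

```python
# Dump the kernel's TCP tunables (tcp_retries2, tcp_syn_retries, etc.)
# from /proc so the pre-tinkering values are on record.
import glob
import os

def dump_tcp_sysctls(base="/proc/sys/net/ipv4"):
    values = {}
    for path in sorted(glob.glob(os.path.join(base, "tcp_*"))):
        try:
            with open(path) as f:
                values[os.path.basename(path)] = f.read().strip()
        except OSError:
            pass  # a few entries are write-only or restricted
    return values

for name, value in dump_tcp_sysctls().items():
    print(name, "=", value)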
It doesn't seem anybody else here in Taiwan has this problem; however,
they hardly use Linux or browse pages in faraway America. I sure
hope the faraway part isn't the problem....
Could it be that I'm using Mandrake 7.2, and one is supposed to
upgrade often or else one's system becomes stale and can't keep up
with the net of the 00's or whatever? As you see, having this problem
on and off for months is taking its toll on my thinking.
No, FTP isn't affected. Therefore starting in April I will be using
FTP to update my new website, which, I must admit, I can't really
browse very often --- oh, maybe once a week if it is a good day.
So, what parameter do I tweak to keep the "lesser sites" from timing
out, to give them more than the second or so it usually takes for the
failure to be announced on my screen?
No, all the /proc/*tcp* timeout values are way over one second, so
that can't be it.
I know, maybe this is like the time one of you guys in the USA tried
to surf some sites in Albania or whatever: 'worked fine when I was
there, now they all time out from abroad'.
OK, there is the long-wait kind of timeout, where you click and
nothing happens for a long time, and then eventually the page appears
or you get a timeout message... no, I don't have that kind of problem.
My problem is I click and, boom, within one second I get the failure
message. Indeed, my proxy [when turned on] says:
The WWWOFFLE server was unable to complete the request for this URL
due to a problem with downloading this page. This can be because
the remote server could not send any reply within the socket
timeout period or a reply was interrupted by a time longer than the
socket timeout period with no data.
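The two failure modes are distinguishable at the socket level, and this is the key clue: a sub-second "boom" means something actively reset the connection, while the long-wait kind means nothing at all came back within the timeout. A sketch (assumed behavior, not my actual setup) that classifies a read attempt:

```python
# Classify what happens when reading a reply: 'timeout' is the
# long-wait failure (silence), 'reset' is the instant "boom" (an
# active RST from the peer or something in between).
import socket

def classify_read(sock, timeout=1.0):
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
        return 'data' if data else 'closed'
    except socket.timeout:
        return 'timeout'          # the long-wait kind of failure
    except ConnectionResetError:
        return 'reset'            # the instant "boom" kind
```

A sub-second failure therefore points at an active reset, not at any timeout parameter on my end.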
Anyways, it seems clear to me that my problem is in the HTTP layer or
just below... nothing deep [like all the newsgroups I posted to :-)]
Could it be a DNS timeout upon second query ... hmmm, back to tcpdump...
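To check the DNS-timeout hypothesis before reaching for tcpdump, one can simply time the resolver by itself; the hostname below is a placeholder. A minimal sketch:

```python
# Time a bare DNS lookup, to see whether the seconds are being spent
# in the resolver rather than in HTTP.
import socket
import time

def time_lookup(host):
    t = time.time()
    try:
        addrs = socket.getaddrinfo(host, 80, socket.AF_INET,
                                   socket.SOCK_STREAM)
        ok = True
    except socket.gaierror:
        addrs, ok = [], False
    return ok, time.time() - t, [a[4][0] for a in addrs]
```

Running it twice in a row on the same name would also show whether the *second* query behaves differently from the first.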