high performance ipfw bridge

high performance ipfw bridge

Post by Devon Blea » Thu, 18 Apr 2002 00:56:57



hello,

i'm looking at putting an ipfw bridge with a pretty minimal ruleset
(figure 100 rules at the very most, i don't think cpu will be a problem)
on an uncapped 100Mbps connection to the internet.

the firewall has 4 fxp interfaces (2 dual-port cards).  now my questions
are: which interfaces do i set up the bridge on?  am i even going to be
running into issues with i/o bottlenecks or interrupts?

here's the relevant dmesg output:
(to summarize it quickly: fxp0 and fxp3 are on int 10, fxp1 and fxp2 are
on int 5, and each dual-port nic ((fxp0,fxp1),(fxp2,fxp3)) is on a
different pci-pci bridge).

-----begin dmesg output-----
pcib0: <Intel 82443GX host to PCI bridge> on motherboard
pci0: <PCI bus> on pcib0
pcib2: <Intel 82443GX (440 GX) PCI-PCI (AGP) bridge> at device 1.0 on
pci0
pci1: <PCI bus> on pcib2
pci1: <Chips & Technologies 69000 SVGA controller> at 0.0 irq 0
isab0: <Intel 82371AB PCI to ISA bridge> at device 7.0 on pci0
isa0: <ISA bus> on isab0
atapci0: <Intel PIIX4 ATA33 controller> port 0xffa0-0xffaf at device 7.1
on pci0
ata0: at 0x1f0 irq 14 on atapci0
ata1: at 0x170 irq 15 on atapci0
pci0: <Intel 82371AB/EB (PIIX4) USB controller> at 7.2
chip1: <Intel 82371AB Power management controller> port 0x440-0x44f at
device 7.3 on pci0
fxp0: <Intel Pro 10/100B/100+ Ethernet> port 0xef00-0xef3f mem
0xfea00000-0xfeafffff,0xfebff000-0xfebfffff irq 10 at device 17.0 on
pci0
fxp0: Ethernet address 00:d0:a8:00:a2:49
inphy0: <i82555 10/100 media interface> on miibus0
inphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
fxp1: <Intel Pro 10/100B/100+ Ethernet> port 0xee80-0xeebf mem
0xfe800000-0xfe8fffff,0xfebfe000-0xfebfefff irq 5 at device 18.0 on pci0
fxp1: Ethernet address 00:d0:a8:00:a2:4a
inphy1: <i82555 10/100 media interface> on miibus1
inphy1:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
pcib3: <DEC 21152 PCI-PCI bridge> at device 20.0 on pci0
pci2: <PCI bus> on pcib3
pcib4: <DEC 21154 PCI-PCI bridge> at device 14.0 on pci2
pci3: <PCI bus> on pcib4
fxp2: <Intel Pro 10/100B/100+ Ethernet> port 0xcf00-0xcf3f mem
0xfe1c0000-0xfe1dffff,0xfe1ff000-0xfe1fffff irq 5 at device 4.0 on pci3
fxp2: Ethernet address 00:02:b3:5e:6b:e3
inphy2: <i82555 10/100 media interface> on miibus2
inphy2:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
fxp3: <Intel Pro 10/100B/100+ Ethernet> port 0xce80-0xcebf mem
0xfe1a0000-0xfe1bffff,0xfe1fe000-0xfe1fefff irq 10 at device 5.0 on pci3
fxp3: Ethernet address 00:02:b3:5e:6b:e4
inphy3: <i82555 10/100 media interface> on miibus3
inphy3:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
-----end dmesg output-----

right now, fxp0 is set up as the management interface, so i'd like to
keep it that way.  i'm thinking fxp1 as the external interface, but
can't decide on fxp2 or fxp3 as the internal interface.
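
for concreteness, here's a rough sketch of what i'm planning (freebsd 4.x;
assuming fxp1 ends up as the outside interface and fxp2 as the inside one --
that choice is exactly what i haven't settled -- and the rules below are
only placeholders):

# kernel config
options         BRIDGE
options         IPFIREWALL
options         IPFIREWALL_DEFAULT_TO_ACCEPT

# /etc/sysctl.conf -- bridge the two firewall ports, leave fxp0 (mgmt) alone
net.link.ether.bridge=1
net.link.ether.bridge_ipfw=1
net.link.ether.bridge_cfg=fxp1:1,fxp2:1   # cluster syntax per bridge(4), from memory

# sample ipfw rules -- stateless, since every bridged packet only passes
# through the list once
ipfw add 100 deny ip from 10.0.0.0/8 to any in via fxp1
ipfw add 110 deny ip from 192.168.0.0/16 to any in via fxp1
ipfw add 65000 allow ip from any to any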

anything else i'm missing?

any help or advice is appreciated.  contact me off-list if you like.

devon

 
 
 

high performance ipfw bridge

Post by WhoCarezWhoIA » Thu, 18 Apr 2002 12:09:49



> hello,

> i'm looking at putting an ipfw bridge with a pretty minimal ruleset
> (figure 100 rules at the very most, i don't think cpu will be a problem)
> on an uncapped 100Mbps connection to the internet.

100 Mbps that's pretty impressive, considering a T3 is 1.5444Mbps
what do you have a fiber line?

 
 
 

high performance ipfw bridge

Post by Devon Blea » Thu, 18 Apr 2002 23:51:48




> 100 Mbps that's pretty impressive, considering a T3 is 1.5444Mbps
> what do you have a fiber line?

actually, a full t3 is 45Mbps.  a full t1 is 1.544Mbps.  this machine is
racked in a datacenter with several fiber connections.

devon

 
 
 

high performance ipfw bridge

Post by WhoCarezWhoIA » Fri, 19 Apr 2002 04:44:18





> > 100 Mbps that's pretty impressive, considering a T3 is 1.5444Mbps
> > what do you have a fiber line?

> actually, a full t3 is 45Mbps.  a full t1 is 1.544Mbps.  this machine is
> racked in a datacenter with several fiber connections.

> devon

damn it i always get those 2 mixed up for some reason....
i will never understand why i keep thinking a t1=45mbps and a t3=1.5444 ....

oh well,

 
 
 

high performance ipfw bridge

Post by Bill Vermilli » Fri, 19 Apr 2002 05:12:23








>> > 100 Mbps that's pretty impressive, considering a T3 is 1.5444Mbps
>> > what do you have a fiber line?
>> actually, a full t3 is 45Mbps. a full t1 is 1.544Mbps. this
>> machine is racked in a datacenter with several fiber connections.
>damn it i always get those 2 mixed up for some reason.... i will
>never understand why i keep thinking a t1=45mbps and a t3=1.5444
>....

A T1 is 24 channels.  They can be 24 56K channels with 8K of overhead
each, or, in PRI mode, 23 64K channels with one channel carrying the
overhead [do the math and you will see this is more efficient].
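Spelling out the math from those figures: 24 x 56K = 1344K of usable
payload, versus 23 x 64K = 1472K usable with the whole 24th channel
carrying the signalling - both riding the same 1.544Mbps pipe.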

A T3 consists of 28 T1s.   And the "T" means they are channelized.
Most data connections are now specified as DS1 or DS3 - so they are
full width without the channelization overhead.

Everything is getting faster anymore.

--

 
 
 

high performance ipfw bridge

Post by Michael Sierchi » Fri, 19 Apr 2002 05:19:02



> A T3 consists of 28 T1s.   And the "T" means they are channelized.
> Most data connections are now specified as DS1 or DS3 - so they are
> full width without the channelization overhead.

> Everything is getting faster anymore.

Except the goddamn last mile.  We'll have 10Gb ethernet at home before
DSLAM equipment can offer any better than 384kb upload (on average).
 
 
 

high performance ipfw bridge

Post by jp » Fri, 19 Apr 2002 05:27:30


On Wed, 17 Apr 2002 13:19:02 -0700,
[snip: datapipes! faster! cheer!]

> Except the goddamn last mile.  We'll have 10Gb ethernet at home before
> DSLAM equipment can offer any better than 384kb upload (on average).

Well, that depends on the telco dimensioning their DSLAMs. I've seen setups
that can do about 512K/1M (officially, up to 8M down when avail), but
DSLAMs ain't exactly cheap. Oh, and the last mile is 100-year-old copper.
Not exactly cat5e, that.

Still, I'll take the 10Gb glass and even take VoIP if I can't get ATM
over Ye Olde Copper on a DSLAM any day.

--
  j p d (at) d s b (dot) t u d e l f t (dot) n l .

 
 
 

high performance ipfw bridge

Post by Bill Vermilli » Fri, 19 Apr 2002 08:42:25





>> A T3 consists of 28 T1s.   And the "T" means they are channelized.
>> Most data connections are now specified as DS1 or DS3 - so they are
>> full width without the channelization overhead.
>> Everything is getting faster anymore.
>Except the goddamn last mile.  We'll have 10Gb ethernet at home before
>DSLAM equipment can offer any better than 384kb upload (on average).

Upload speeds are different from download speeds because of the
design of DSL.

The problems are caused by what is called FEXT and NEXT.

FEXT - far end cross-talk - and NEXT - near end cross-talk.

You can take a large cable and push data at high speed OUT of the
system, as you are driving each pair with a set of drivers.  The
levels are sufficient that you don't get cross-talk at the sending
end - NEXT - that causes problems.  At the far end the wires split
out to just a few, and the high speed is no problem.

But going the other way, if you have high speed sends from the homes,
then at the CO you have a large mass of cable with a LOT of high speed
signals - this is the FEXT as seen from the home side - and these
will interfere with each other.

This is part of the DSL design.

There is SDSL which really boils down to two-wire T1.  Think of it
as 1/2 of a T1 with DSL encoding as opposed to T1 encoding.

There is also high-speed DSL.  For $300/month I can get
DSL with 3MBIT bandwidth both ways.   At $99 I can move from
the 512/128 to 1Mb/256.

There is more talk of FTTC - Fibre To The Curb - but as long as we
have metal on standard lines we are going to see things like this.

To get higher speeds the line pairs have to be chosen carefully, and
it's best for the telco to have only 1 or 2 high speed incoming lines
in one bundle.

Certain laws of physic pertaining to frequency and levels in metal
wires that are close together just can be worked around.
More and more metal is disappearing even for local metro transport.

Bill

--

 
 
 

high performance ipfw bridge

Post by Bill Vermilli » Fri, 19 Apr 2002 08:57:33




>On Wed, 17 Apr 2002 13:19:02 -0700,

>[snip: datapipes! faster! cheer!]
>> Except the goddamn last mile.  We'll have 10Gb ethernet at home before
>> DSLAM equipment can offer any better than 384kb upload (on average).
>Well that depends on telco dimensioning their DSLAMs, I've seen setups
>that can do about 512K/1M (officially, up to 8M down when avail), but
>DSLAMs ain't exactly cheap. Oh and the last mile is 100yo copper.
>Not exactly cat5e, that.
>Still, I'll take the 10Gb glass and even take VoIP if I can't get ATM
>over Ye Olde Copper on a DSLAM any day.

Well, my DSL is provisioned so that I'm running PPPoA and not the typical
PPPoE for my DSL modem.  It really is more efficient than
encapsulating Ethernet in ATM to transport to the provider and then
converting back to Ethernet.
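
If I have the numbers right, PPPoE stuffs an Ethernet header plus a 6-byte
PPPoE header and a 2-byte PPP protocol field into every AAL5 frame - which
is also why the usual PPPoE MTU ends up at 1492 - while PPPoA carries the
PPP payload straight over ATM and keeps the full 1500.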

--

 
 
 

high performance ipfw bridge

Post by Mats Lofkvist » Fri, 19 Apr 2002 18:07:33





[snip]

> >Except the goddamn last mile.  We'll have 10Gb ethernet at home before
> >DSLAM equipment can offer any better than 384kb upload (on average).

[snip]

> There is also high-speed DSL.  For $300/month I can get
> DSL with 3MBIT bandwidth both ways.   At $99 I can move from
> the 512/128 to 1Mb/256.

Got 2.4Mbit/768kbit ADSL for ~$30/month here (in Sweden).

      _
Mats Lofkvist

 
 
 

high performance ipfw bridge

Post by jp » Fri, 19 Apr 2002 18:39:39



[snip]

> Got 2.4Mbit/768kbit ADSL for ~$30/month here (in Sweden).

Too bad your alcoholic beverages are so darn expensive, else I'd
think of moving. :-)

--
  j p d (at) d s b (dot) t u d e l f t (dot) n l .

 
 
 

high performance ipfw bridge

Post by Bill Vermilli » Fri, 19 Apr 2002 23:27:23






>[snip]
>> >Except the goddamn last mile.  We'll have 10Gb ethernet at home before
>> >DSLAM equipment can offer any better than 384kb upload (on average).

>[snip]

>> There is also high-speed DSL.  For $300/month I can get
>> DSL with 3MBIT bandwidth both ways.   At $99 I can move from
>> the 512/128 to 1Mb/256.
>Got 2.4Mbit/768kbit ADSL for ~$30/month here (in Sweden).

Lucky you on the price - but I can't take the cold anymore - I left
mountains, 20 feet of snow and cold years ago.  Our temperatures
here in Florida are more akin to Northern Africa's.  But other than
that it sounds good :-)

--

 
 
 

high performance ipfw bridge

Post by Martin Hepworth » Sat, 20 Apr 2002 00:20:55







>>[snip]

>>>>Except the goddamn last mile.  We'll have 10Gb ethernet at home before
>>>>DSLAM equipment can offer any better than 384kb upload (on average).

>>[snip]

>>>There is also high-speed DSL.  For $300/month I can get
>>>DSL with 3MBIT bandwidth both ways.   At $99 I can move from
>>>the 512/128 to 1Mb/256.

>>Got 2.4Mbit/768kbit ADSL for ~$30/month here (in Sweden).

> Lucky you on the price - but I can't take the cold anymore - I left
> mountains, 20 feet of snow and cold years ago.  Our temperature
> here in Florida are more akin to Northern Africa.  But other than
> that it sounds good :-)

Sounds like a good choice - warm outside or RAW BANDWIDTH CHEAPLY.

hmm bandwidth good - wool jumpers good.

Unfortunately I couldn't stand the high taxes in Sweden, so I think I'll
stay in the UK.

:-)

--
Martin Hepworth
Senior Systems Administrator
Solid State Logic Ltd
+44 (0)1865 842300

 
 
 

high performance ipfw bridge

Post by Steve O'Hara-Smit » Fri, 19 Apr 2002 15:38:28


On Wed, 17 Apr 2002 23:42:25 GMT

BV> Certain laws of physic pertaining to frequency and levels in metal
BV> wires that are close together just can be worked around.

        Please tell us there's an "'t" missing in the line above.

        Nice post BTW.

--
C:>WIN                                          |     Directable Mirrors
The computer obeys and wins.                    |A Better Way To Focus The Sun
You lose and Bill collects.                     |  licenses available - see:
                                                |   http://www.sohara.org/

 
 
 

high performance ipfw bridge

Post by Bill Vermilli » Sun, 21 Apr 2002 01:42:23




>On Wed, 17 Apr 2002 23:42:25 GMT

>BV> Certain laws of physic pertaining to frequency and levels in metal
>BV> wires that are close together just can be worked around.
>    Please tell us there's an "'t" missing in the line above.

Missing 't? Yup - CAN'T be worked around. I can also see
a missing s. There is quite a difference between 'physic' and
'physics' :-)

>    Nice post BTW.

Learned a lot of this working with DSL on the provider side and
working with telco people.

--

 
 
 

IPFW & BRIDGE

I have a FreeBSD box with two network cards; my kernel is compiled with:
options IPFIREWALL
options BRIDGE
options DUMMYNET (for the traffic shaper)

In sysctl.conf:
net.link.ether.bridge=1
net.link.ether.bridge_ipfw=1

This configuration does not work; why?

Thanks

Marco G.
