Performance Problems on UBR10k

Hi all, we have a uBR10k. The problem is poor performance for our clients: lost pings and delays loading pages.

We have four bundles; on one of them we have five /24 IP pools.

As a test we removed four of the pools, and the problem went away.

Any idea why this could happen?
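
For reference, removing the pools was just a matter of deleting the secondary addresses from the bundle sub-interface, roughly like this (a sketch only; the 192.0.2.x / 198.51.100.x addresses are placeholders for the ones masked in the config below):

nave01cm02# configure terminal
nave01cm02(config)# interface Bundle85.100
nave01cm02(config-subif)# no ip address 192.0.2.1 255.255.255.0 secondary
nave01cm02(config-subif)# no ip address 198.51.100.1 255.255.255.0 secondary
nave01cm02(config-subif)# end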

I've pasted the relevant output below:

nave01cm02#show interfaces bundle 80.100
Bundle80.100 is up, line protocol is up
Hardware is Cable Virtual-bundle interface, address is 68ef.bd85.b0da (bia 0000.0000.0000)
Internet address is 36.3.168.1/22
MTU 1500 bytes, BW 26000 Kbit, DLY 1000 usec,
reliability 255/255, txload 221/255, rxload 49/255
Encapsulation MCNS
ARP type: ARPA, ARP Timeout 04:00:00
Keepalive set (10 sec)
Last clearing of "show interface" counters never

nave01cm02#show running-config interface bundle 85.100
Building configuration...

Current configuration : 544 bytes
!
interface Bundle85.100
ip vrf forwarding ***.***.***
ip address ***.***.***.*** 255.255.255.0 secondary
ip address 36.3.251.33 255.255.255.224 secondary
ip address ***.***.***.*** 255.255.255.0 secondary
ip address ***.***.***.*** 255.255.255.0 secondary
ip address ***.***.***.*** 255.255.255.0 secondary
ip address 36.3.196.1 255.255.252.0 secondary
ip address ***.***.***.*** 255.255.255.0 secondary
ip address 36.3.192.1 255.255.252.0
cable dhcp-giaddr primary
cable helper-address 190.220.191.36
cable helper-address 190.220.191.37
end

nave01cm02#show running-config interface bundle 85
Building configuration...

Current configuration : 111 bytes
!
interface Bundle85
no ip address
cable arp filter request-send 3 2
cable arp filter reply-accept 3 2
end

nave01cm02#show running-config interface cable 8/1/0
Building configuration...

Current configuration : 2251 bytes
!
interface Cable8/1/0
downstream Integrated-Cable 8/1/0 rf-channel 0-3
cable mtc-mode
no cable packet-cache
cable default-phy-burst 0
cable bundle 85
cable upstream max-ports 4
cable upstream bonding-group 1
upstream 1
upstream 2
upstream 3
attributes A0000000
cable upstream 0 connector 0 shared
cable upstream 0 frequency 20800000
cable upstream 0 channel-width 3200000 3200000
cable upstream 0 threshold corr-fec 5
cable upstream 0 load-balance group 20
cable upstream 0 docsis-mode atdma
cable upstream 0 minislot-size 2
cable upstream 0 range-backoff 3 6
cable upstream 0 modulation-profile 223
cable upstream 0 attribute-mask 20000000
no cable upstream 0 shutdown
cable upstream 1 connector 0 shared
cable upstream 1 frequency 25600000
cable upstream 1 channel-width 6400000 6400000
cable upstream 1 threshold snr-profiles 24 0
cable upstream 1 threshold corr-fec 5
cable upstream 1 load-balance group 20
cable upstream 1 docsis-mode atdma
cable upstream 1 minislot-size 1
cable upstream 1 range-backoff 3 6
cable upstream 1 modulation-profile 224 223
cable upstream 1 attribute-mask 20000000
no cable upstream 1 shutdown
cable upstream 2 connector 2 shared
cable upstream 2 frequency 32000000
cable upstream 2 channel-width 6400000 6400000
cable upstream 2 threshold snr-profiles 24 0
cable upstream 2 threshold corr-fec 5
cable upstream 2 load-balance group 20
cable upstream 2 docsis-mode atdma
cable upstream 2 minislot-size 1
cable upstream 2 range-backoff 3 6
cable upstream 2 modulation-profile 224 223
cable upstream 2 attribute-mask 20000000
no cable upstream 2 shutdown
cable upstream 3 connector 2 shared
cable upstream 3 frequency 38400000
cable upstream 3 channel-width 6400000 6400000
cable upstream 3 threshold snr-profiles 24 0
cable upstream 3 threshold corr-fec 5
cable upstream 3 load-balance group 20
cable upstream 3 docsis-mode atdma
cable upstream 3 minislot-size 1
cable upstream 3 range-backoff 3 6
cable upstream 3 modulation-profile 224 223
cable upstream 3 attribute-mask 20000000

Thanks a lot.

What's the bandwidth utilization on your CMTS uplink(s)?
What's the bandwidth utilization on your downstream channels (64-QAM or 256-QAM)?
What's the bandwidth utilization on your upstream channels?

Perhaps when you removed those pools your CPE count dropped, so less bandwidth was being used across the downstream channels and the uplinks.
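
If it helps, here's a rough set of IOS commands for pulling those numbers on a uBR10k (just a sketch; GigabitEthernet1/0/0 is an example uplink name, substitute your own). Note that the Bundle80.100 output you already posted shows txload 221/255, which by itself points to a heavily loaded downstream, though txload is computed against the configured BW (26000 Kbit) so treat it as a rough indicator.

! Uplink and bundle load (x/255 scale plus 5-minute rates)
show interfaces GigabitEthernet1/0/0 | include rate|load
show interfaces Bundle80.100 | include load
! Average upstream channel utilization per US port
show interface cable 8/1/0 mac-scheduler 0
show interface cable 8/1/0 mac-scheduler 1
! Modem counts per interface
show cable modem summary total
! CPU load on the route processor
show processes cpu sorted | exclude 0.00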