What we ran into was this: if 5/1/0:0 had 8 channels, the 8-channel modems would work but the 4-channel modems would lock only 1 DS (without the bonding-group-secondary command); however, if we made 5/1/0:0 a 4-channel group, the 8-channel and 4-channel modems would each lock 4 channels. It's almost as if the 4-channel modems see the first wideband group (:0), say "I can't do 8...", and fall back to a single legacy Integrated-Cable channel.
How we have been testing:
!
interface Integrated-Cable5/1/0:0
cable bundle 20
cable rf-bandwidth-percent 25 remaining ratio 100
!
.....etc.....
!
interface Wideband-Cable5/1/0:0
cable bundle 20
cable rf-channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel 3 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 3 bandwidth-percent 10 remaining ratio 100
!
interface Wideband-Cable5/1/0:1
cable bundle 20
cable rf-channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel 3 bandwidth-percent 10 remaining ratio 100
!
interface Wideband-Cable5/1/0:2
cable bundle 20
cable rf-channel controller 1 channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 1 channel 3 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 3 bandwidth-percent 10 remaining ratio 100
!
interface Wideband-Cable5/1/0:3
cable bundle 20
cable rf-channel controller 2 channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 2 channel 3 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 3 channel 0 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 3 channel 1 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 3 channel 2 bandwidth-percent 10 remaining ratio 100
cable rf-channel controller 3 channel 3 bandwidth-percent 10 remaining ratio 100
!
**We were testing load balancing across a card by sending 20 frequencies to one area; unfortunately, the link budget will only allow about 8 nodes to be serviced in this one-card setup, instead of the 25-30 we have currently.
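To see what each modem actually locked, the standard show commands are the quickest check (these should be valid on the SCE train, but output format varies by release so it's omitted here):

```
show cable modem
show cable modem wideband
show interface wideband-cable 5/1/0:0
```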
Right off the bat, I see you need a four-channel logical group for *each* controller, instead of just the first one.
I think you also want dynamic bandwidth sharing (DBS) turned on.
Example:
interface Wideband-Cable1/0:0
cable bundle 1
cable bonding-group-id 1
cable dynamic-bw-sharing
cable rf-channel 0 bandwidth-percent 1 remaining ratio 100
cable rf-channel 1 bandwidth-percent 1 remaining ratio 100
cable rf-channel 2 bandwidth-percent 1 remaining ratio 100
cable rf-channel 3 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 0 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 1 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 2 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 3 bandwidth-percent 1 remaining ratio 100
!
interface Wideband-Cable1/0:1
cable bundle 1
cable bonding-group-id 2
cable dynamic-bw-sharing
cable rf-channel 0 bandwidth-percent 1 remaining ratio 100
cable rf-channel 1 bandwidth-percent 1 remaining ratio 100
cable rf-channel 2 bandwidth-percent 1 remaining ratio 100
cable rf-channel 3 bandwidth-percent 1 remaining ratio 100
!
interface Wideband-Cable1/0:2
cable bundle 1
cable bonding-group-id 3
cable dynamic-bw-sharing
cable rf-channel controller 1 channel 0 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 1 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 2 bandwidth-percent 1 remaining ratio 100
cable rf-channel controller 1 channel 3 bandwidth-percent 1 remaining ratio 100
!
interface Integrated-Cable1/0:0
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/0:1
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/0:2
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/0:3
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
< snip >
!
interface Wideband-Cable1/1:0
shutdown
cable bonding-group-id 7
cable dynamic-bw-sharing
!
interface Integrated-Cable1/1:0
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/1:1
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/1:2
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
interface Integrated-Cable1/1:3
cable bundle 1
cable dynamic-bw-sharing
cable rf-bandwidth-percent 20 remaining ratio 100
!
You said you were tinkering with 20 DS in one mac domain.
That is do-able; however, you are limited to 8 US per mac domain, so such a config wouldn't be a very efficient use of the 20x20 hardware.
Since we are using the 12.2(33)SCE4 version of IOS, we have this by default:
*Starting with Cisco IOS Release 12.2(33)SCE, the DBS mode is enabled by default, on the WB/MC/IC interfaces.
*Starting with Cisco IOS Release 12.2(33)SCE, the cable bonding-group-secondary command replaces the cable bonding-group-id command. If you upgrade from an earlier Cisco IOS Release to Cisco IOS Release 12.2(33)SCE and later, the cable bonding-group-id command will no longer change the bonding-group ID.
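So on SCE, secondary groups get flagged with the new command rather than hard-coded IDs; a hedged sketch based on the note above (the interface and channel numbers are just carried over from the earlier example, and which groups should be marked secondary depends on your primary-channel layout):

```
interface Wideband-Cable5/1/0:1
 cable bundle 20
 cable bonding-group-secondary
 cable rf-channel 0 bandwidth-percent 10 remaining ratio 100
 cable rf-channel 1 bandwidth-percent 10 remaining ratio 100
 cable rf-channel 2 bandwidth-percent 10 remaining ratio 100
 cable rf-channel 3 bandwidth-percent 10 remaining ratio 100
```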
So....
> **We were testing load balancing across a card by sending 20 frequencies to one area; unfortunately, the link budget
> will only allow about 8 nodes to be serviced in this one-card setup, instead of the 25-30 we have currently.
Sorry, but I'm having a little bit of trouble following what you are trying to achieve.
If I understand you correctly:
* you have MC20X20V linecards
* you want to support 4X and 8X DS bonding
* your lab config was testing sending all 20 DS from 1 linecard to 1 mac-domain ?
* your production config is going to run a more typical 4 mac-domains per linecard ?
( Note: "mac-domain" is sometimes also known as "service area"; from the CMTS running-config point of view, it is a single "interface cable x/y". )
Please explain exactly what scenario you are wanting to implement, and it won't be too hard for us to provide a suggested config to you.
The 20x20 cards are very flexible, but the downside of that is you have to have multiple parts of the running-config 100% correct for it to work: controller integrated-cable, interface cable, interface integrated-cable, interface wideband-cable, and cable fiber-node (and maybe also cable load-balance).
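As a reference point for the fiber-node piece, here is a minimal sketch (the slot/port numbers are purely illustrative, and the exact rf-channel range syntax differs between releases, so double-check it against your IOS):

```
cable fiber-node 1
 downstream Integrated-Cable 5/1/0 rf-channel 0-3
 upstream Cable 5/0 connector 0
```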