We have a Casa C3200 in our lab environment, with 1 modem connected to it (8 DS / 4 US channel bonding). We are sending traffic in both the US and DS directions for testing.
The DS direction seems to pass traffic fine with a latency average of about 4ms (up to around 300Mbps).
However, when sending US traffic the latency average is around 70ms (higher at higher US rates). We get some frames at 100ms+ at around 270Mbps.
Originally we had issues with some of our US traffic frames not passing (US channel utilization dropped very low after about 30 seconds of running traffic), but applying a "service-class" profile to the US config seemed to solve that. Now we don't get many dropped frames, but the latencies are still extremely high.
We imagine we must be missing some Casa setting or modem configuration file setting that is causing this. Does anyone know of settings that might cause this behavior? Any help would be greatly appreciated.
If we can clarify anything, please let us know.
Latency on the US is usually controlled by the upstream modulation profile. Latency stacks up when SNR is bad and the FEC/CER percentage rises; the modulation profile then controls how many packets the CMTS will try to fix and how many it will drop. You can't just copy-paste a modulation profile from Cisco, but it's not that hard to configure an equivalent one. If you don't know how to check your FEC/CER ratios, you can use my GitHub project: https://github.com/l-n-monitoring/CMTS-Monitoring
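If you would rather poll the counters yourself, here is a minimal Python sketch of the same idea, walking the standard DOCS-IF-MIB signal-quality columns. The hostname and community string are placeholders, and it assumes the pysnmp (v4) hlapi:

# Sketch: walk DOCS-IF-MIB signal-quality counters on a CMTS and print
# the FEC corrected / uncorrectable codeword percentages per upstream.
# 'cmts.example.net' and 'public' below are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

UNERROREDS     = '1.3.6.1.2.1.10.127.1.1.4.1.2'  # docsIfSigQUnerroreds
CORRECTEDS     = '1.3.6.1.2.1.10.127.1.1.4.1.3'  # docsIfSigQCorrecteds
UNCORRECTABLES = '1.3.6.1.2.1.10.127.1.1.4.1.4'  # docsIfSigQUncorrectables

def walk(host, community, oid):
    # Walk one table column; yield (ifIndex, counter value).
    for err_ind, err_stat, _, var_binds in nextCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity(oid)), lexicographicMode=False):
        if err_ind or err_stat:
            break
        for name, value in var_binds:
            yield name.prettyPrint().rsplit('.', 1)[-1], int(value)

def fec_ratios(host, community='public'):
    good   = dict(walk(host, community, UNERROREDS))
    corr   = dict(walk(host, community, CORRECTEDS))
    uncorr = dict(walk(host, community, UNCORRECTABLES))
    for idx, ok in good.items():
        total = ok + corr.get(idx, 0) + uncorr.get(idx, 0)
        if total:
            print(f"ifIndex {idx}: corrected {100 * corr.get(idx, 0) / total:.3f}%, "
                  f"uncorrectable {100 * uncorr.get(idx, 0) / total:.3f}%")

fec_ratios('cmts.example.net')

Keep in mind these counters are cumulative since channel initialization, so for a live error rate you want to poll twice and work with the deltas.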
Hi, thank you for your response! We believe the SNR and FEC were within the proper ranges in our setup. We were actually able to get appropriate latencies and no frame drops by changing the "target buffer size" property in the service-class on the Casa (set to around 20 MB).
There are quite a few other settings in this service-class that we have yet to experiment with, but the "tar-buff-size" property seems to be very sensitive: depending on its value, it can drastically change our latencies and frame-drop percentage.
Would you or anyone else have insight into values commonly used in production for these settings? We are currently trying to work out a configuration closer to a real-world use case. Below is the output for the upstream service-class we currently have applied:
application-class                    : 0
Traffic Priority                     : 0
Maximum Sustained rate               : 300000 kbps
Maximum Burst                        : 4194303 bytes
Minimum Reserved rate                : 70000 kbps
Minimum Packet Size                  : 0 bytes
Peak Traffic Rate                    : 0 kbps
Admitted QoS Timeout                 : 200 seconds
Active QoS Timeout                   : 0 seconds
Maximum Concatenated Burst           : 65535 bytes
max-buff-size                        : 40000000 bytes
min-buff-size                        : 15000000 bytes
tar-buff-size                        : 21000000 bytes
Scheduling Type                      : Best Effort
Request/Transmission policy          : 0x0
IP ToS Overwrite [AND-mask, OR-mask] : 0xff, 0x0
Current Throughput                   : 0 kbps, 0 packets/sec
Contention request                   : 48
Piggyback request                    : 1
Grants scheduled                     : 49
Grants not used                      : 0
Token bucket (min reserved bytes)    : 4286217296
Packet received                      : 37
Bytes received                       : 7506 bytes
v6 Packets rcvd                      : 0
v6 Bytes rcvd                        : 0 bytes
Packet dropped                       : 4
HCS errors                           : 0
CRC errors                           : 0
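As a sanity check we also converted those buffer sizes into worst-case queuing delay; a full buffer drains at the service-flow rate, so delay = bytes * 8 / rate. A quick sketch using the numbers above (the 50 ms figure at the end is just a common bufferbloat rule of thumb, not a Casa recommendation):

# Worst-case time to drain a full buffer at the service-flow rate.
def drain_ms(buffer_bytes, rate_bps):
    return buffer_bytes * 8 / rate_bps * 1000

MAX_RATE = 300_000_000  # 300000 kbps max sustained rate from the output above

for name, size in (("min-buff", 15_000_000),
                   ("tar-buff", 21_000_000),
                   ("max-buff", 40_000_000)):
    print(f"{name} {size:>8} bytes -> {drain_ms(size, MAX_RATE):5.0f} ms at 300 Mbps")
# min-buff -> 400 ms, tar-buff -> 560 ms, max-buff -> 1067 ms

# Sizing the other way: a buffer that drains in ~50 ms at 300 Mbps
print(f"{MAX_RATE * 0.050 / 8:,.0f} bytes")  # 1,875,000 bytes

So the configured target buffer corresponds to roughly half a second of queuing when it fills, which is presumably why small changes to it swing our latency and drop numbers so much.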
Is your upstream test congesting all the upstream channels on this segment?
If so, it is normal to see 60ms+ latency for all modems on that segment.
Once the US scheduler on the CMTS is full, data starts to queue up in the modems, and latency jumps.
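To put rough numbers on that, using the figures from the original post:

# Approximate standing queue implied by sustained upstream latency:
# queued bytes ~= latency * drain rate / 8
latency_s = 0.070          # ~70 ms average US latency reported above
drain_bps = 270_000_000    # ~270 Mbps US test rate reported above
print(f"~{latency_s * drain_bps / 8 / 1e6:.1f} MB queued")  # ~2.4 MB

That backlog has to sit somewhere, and once the CMTS grants are exhausted it sits in the modems' upstream buffers.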
Yes, you can mess around with the buffer settings, but that is "black magic" I would not recommend. Every modem is different, and I reckon you will find it impossible to come up with one setting that works well for everyone.
Better to try to avoid congestion, e.g.:
1/ Don't reserve traffic (get rid of your 70000 kbps minimum reserved rate)
2/ Aim to run your channels at their highest capacity: 6.4 MHz width, 64-QAM (requires a clean plant)
3/ Add more channels
4/ Restrict users who move excessive data (e.g. the "Subscriber Traffic Management" feature on Cisco CMTS)
etc
In the past I heard of bufferbloat issues on Casa/Arris CMTSs,
but those should be resolved on newer firmware.
Cisco never had a bufferbloat problem in my experience.