John Linton

It seems a very long time ago now that we realised P2P usage had the ability to wreck the plan models we believed would be required in the future broadband market, and set out to find short, medium and long term solutions to those very real dangers. In fact it was two and a half years ago (four and a half if you count our initial 'free time' initiative) - which in this business can be a lifetime for a small communications company, being longer than most start ups actually last.
We have had our share of issues, problems, setbacks and every other difficulty inherent in pioneering anything, and there have been more than a few times when I felt like giving up and changing the models and objectives. Many of the problems we encountered were clearly our fault (lack of knowledge/lack of planning/lack of patience/lack of......) but many more stemmed from deficiencies or shortcomings in the delivery of the 'promises' made by the hardware and firmware providers.
Irrespective of who or what caused the various problems at various times, we have gradually gained a much better understanding of the elements of P2P in terms of network provisioning, and we have learned a great deal about how to deal with P2P in the future using a range of hardware and software 'tools' that didn't exist 30 months ago.
Our initial 'protection' against P2P downloads swamping peak time bandwidth was the introduction (from our second month in business) of the 'free' off peak period, which encourages our customers to use the low usage hours of the night to start file downloads. We have modified the conditions and time frames of this period over the last almost five years and have got it progressively 'more right'.
Our second 'bandwidth control' phase was to install 'bleeding edge' hardware to identify P2P packets via DPI (deep packet inspection) and restrict the amount of bandwidth the network made available to P2P protocols. From our point of view this was initially very successful: by slowing down P2P downloads and effectively spreading them over the less used time periods, it reduced the amount of raw IP bandwidth we needed to buy to handle total customer volume.
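In software terms the mechanism is roughly this (a toy sketch only, not how a commercial DPI appliance is built; the byte signatures, class names and bucket parameters are invented for illustration): inspect each packet's payload for a known protocol signature, then pass P2P-classified traffic through a token bucket so it can never exceed a fixed share of the link.

```python
# Toy sketch of DPI classification plus rate limiting.
# Signatures and parameters are illustrative only.

# Simplified payload signatures for two well-known P2P handshakes.
P2P_SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",
    b"GNUTELLA CONNECT": "gnutella",
}

def classify(payload: bytes) -> str:
    """Label a packet payload by inspecting its leading bytes."""
    for signature, protocol in P2P_SIGNATURES.items():
        if payload.startswith(signature):
            return protocol
    return "other"

class TokenBucket:
    """Cap P2P throughput: tokens refill at `rate` bytes/sec up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, size: int, now: float) -> bool:
        # Refill tokens for the elapsed time, then spend them if possible.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # packet is queued or dropped by the shaper
```

The fragility the next paragraph describes falls out of this design: the signature table has to be updated every time the P2P developers change their handshakes or add encryption, and a stale or over-broad signature misclassifies other traffic.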
However the software was not perfect and, because of the need to keep up with the constant changes the P2P developers kept making to their encryption and other processes, we went through periods of 'mis-identifying' other protocols as P2P. This caused some customer dissatisfaction while we waited for fixes and doubtless cost us a number of customers. Those periods lasted too long and were commercially unacceptable, and we almost reached the point of abandoning the DPI equipment, but we persevered and have not had a repeat of these issues for almost 12 months now.
Our third 'phase', which we started almost a year ago, was to install another 'bleeding edge' piece of hardware that cached P2P traffic. In theory this eliminated some of the raw IP bandwidth but still consumed customer connectivity bandwidth, so it didn't produce the same level of savings as simply reducing total network bandwidth spend on P2P; it did, however, reduce the most expensive component of that spend and, as a major plus, actually sped up the delivery of the most popular P2P files.
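The economics of such a cache can be sketched in a few lines (an illustrative model, not the vendor's implementation; the class and function names are invented): a requested piece of content is fetched over the expensive upstream 'raw IP' link only once, and every later request is served from local storage over the customer side of the network.

```python
# Minimal sketch of a transparent P2P cache: popular pieces are served
# from local storage instead of being fetched repeatedly over the
# (expensive) upstream "raw IP" link. Names are illustrative only.

class PieceCache:
    def __init__(self):
        self.store = {}          # piece hash -> cached bytes
        self.upstream_bytes = 0  # raw IP bandwidth actually consumed

    def fetch(self, piece_hash: str, fetch_upstream) -> bytes:
        """Return a piece, going upstream only on a cache miss."""
        if piece_hash in self.store:
            return self.store[piece_hash]    # hit: zero upstream cost
        data = fetch_upstream(piece_hash)    # miss: pay the raw IP cost once...
        self.upstream_bytes += len(data)
        self.store[piece_hash] = data        # ...then serve locally thereafter
        return data
```

This also explains the speed-up observed for popular files: a cache hit is delivered at whatever rate the customer-side link allows, rather than at the pace of distant peers.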
One major drawback of this box was that it required us to upgrade the 'customer side' of the network bandwidth more rapidly, and 12 months ago that bandwidth, for Optus ADSL2, was as expensive as raw IP bandwidth. However, contract re-negotiation partially addressed that issue, and our growing knowledge and broadening skill set began to address the others.
Which brings us to today - the start of the fourth (of now five) phases in re-engineering our network to eliminate the 'curse of P2P': to further reduce the cost of providing P2P data (which now accounts for over 75% of all data on our network) and to remove any 'speed constraints' we have previously had to apply to P2P. Over the next few months we will 'reverse' the way we have managed P2P. We will use the Allot boxes to 'protect' all non P2P traffic by dynamically ensuring that every type of end to end (customer - data source - customer) non P2P traffic has more than enough bandwidth at all times, so that no 'saturation' can ever occur. At the same time, we will use the raw IP bandwidth savings an 'unrestricted' P2P cache generates to buy more 'customer side' (and therefore, at the moment, much lower cost) bandwidth to handle the increase in P2P traffic that will now be generated.
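The 'reversal' amounts to a different allocation rule (a simplified sketch under assumed numbers, not Allot configuration; the function name, headroom factor and figures are invented): rather than capping P2P at a fixed share, reserve whatever the non-P2P traffic currently demands, plus headroom, and let P2P expand into everything left over.

```python
# Sketch of the "reversed" policy: guarantee non-P2P traffic its demand
# plus headroom first, then let P2P use the remainder of the link.
# Parameters and figures are illustrative, not a real configuration.

def allocate(link_mbps: float, non_p2p_demand: float, p2p_demand: float,
             headroom: float = 1.5) -> dict:
    """Reserve non-P2P demand (plus headroom) first; P2P gets what is left."""
    reserved = min(link_mbps, non_p2p_demand * headroom)
    p2p_share = min(p2p_demand, link_mbps - reserved)
    return {"non_p2p": reserved, "p2p": max(0.0, p2p_share)}
```

Because the reservation tracks actual non-P2P demand rather than a static cap on P2P, interactive traffic can never be saturated, while P2P is no longer artificially slowed whenever spare capacity exists.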
That's the theory and, doubtless, as with the previous three phases it will take time to make 100% effective, but it is a sensible way of operating a network under so much strain from P2P in an ADSL2 environment.
Fingers crossed as the first phase begins today.