
19 Jun 11:30:03
Details
19 Jun 11:28:43
We are seeing around 1-2% packet loss on the MAIDA VALE Exchange. This has been reported to the TSO team within BT Wholesale.
Broadband Users Affected 0.50%
Started 19 Jun 11:26:46
Update was expected 21 Jun 13:00:00
Previously expected 21 Jun 13:00:00

2 Jun 13:51:26
Details
25 May 22:24:55
We're seeing peak time (evening) congestion on lines at the BERMONDSEY exchange again. It started on the evening of 29th April. We reported this previously on 14 Jan, and it was fixed on 15 Jan. We'll update this post shortly.
Update
2 Jun 13:51:26
No updates as yet. We are chasing TT again today.
Update
5 Jun 09:18:13
Sadly no update, we'll chase this via alternate channels!
Started 25 May 22:23:31

4 May 21:52:34
Details
4 May 21:41:59
We are seeing congestion on the HORNDEAN and WATERLOOVILLE exchanges (Hampshire). This is usually noticeable in the evenings. This will be reported to BT, and we'll update this post as we hear more.
Started 4 May 21:38:34

10 Jun 02:21:49
Details
10 Jun 10:26:47
It seems that one of our BT links had an issue overnight. It was only one of our four links, and only BT services were affected. Lines have reconnected; if yours has not, try power cycling your router. We don't have any explanation from BT as yet.
Started 10 Jun 02:03:19
Closed 10 Jun 02:21:49

12 May 10:27:29
Details
5 May 10:21:17
BT have had trouble with this exchange (http://aastatus.net/2052) but we are now seeing evening congestion on TalkTalk connected lines. We have reported this and will update this post accordingly.
Update
5 May 12:38:47
TalkTalk have fixed a misconfiguration at their end. This should now be resolved. We'll check again tomorrow.
Update
7 May 08:55:48
Lines still seem to be congested; this is being looked into by TalkTalk.
Update
13 May 12:13:36
Update from TT 'This should now properly be resolved. A faulty interface meant that we were running at 2/3 of capacity - only a few of your subscribers would have suffered last night hopefully but should now all be good.'
Started 5 May 10:00:00
Closed 12 May 10:27:29

25 May 21:30:00
Details
25 May 21:45:20
From around 6:30pm there were IPv6 routing problems to some hosts via our Linx peering - notably to Google.
Resolution A temporary workaround was applied at 9:30pm and routing has been restored. Staff were alerted to this via customers using the 'MSO' SMS facility; however, due to individual staff circumstances it was not until 9pm that staff were able to respond. We do apologise for the time this took; it is very rare for all the staff who are alerted in this way to be unavailable at the same time. We'll consider what can be done to improve this.
Started 25 May 18:30:00
Closed 25 May 21:30:00

13 May 12:16:45
Details
26 Mar 09:53:31

Over the past couple of weeks we have seen FTTC lines drop and reconnect with an increase in latency of around 15ms. This is seen on the monitoring graphs as a thicker blue line.

Upon first glance it looks as if interleaving has been enabled, but a line test shows that this is not the case.

We've been in contact with BT and it does look like BT are rolling out a new profile on to their Huawei DSLAMs in the local green cabinets. It has been expected that BT would be rolling out this new profile, but we didn't expect such an increase in latency.

The profile adds 'Physical Retransmission (ReTX) technology (G.INP / ITU G.998.4)' which helps with spikes of electromagnetic interference and can make lines more stable.

We would hope to have control over enabling and disabling this profile, but we don't. Line profiles on FTTC are managed by BT Openreach and are tricky for us, and even for BT Wholesale, to get adjusted.

We're still discussing this with BT and will update this post with news as we have it.

Update
26 Mar 10:48:37
This has been escalated to the head of fibre deployment within BT Wholesale and we are expecting an update by the end of the day.
Update
26 Mar 11:12:08
Further information about G.INP:
  • http://www.ispreview.co.uk/index.php/2015/01/bt-enables-physical-retransmission-g-inp-fttc-broadband-lines.html
  • http://www.thinkbroadband.com/news/6789-impulse-noise-protection-rolling-out-on-openreach-vdsl2.html
  • http://forum.kitz.co.uk/index.php?topic=15099.0
...among others.
Update
27 Mar 16:46:22
BT have asked us for further information, which we have provided. We don't expect an update now until Monday.
Update
9 Apr 14:26:19
This is still ongoing with BT Wholesale and BT Openreach.
Update
16 Apr 15:58:53
This has been escalated to a very senior level within BT and we are expecting a proper update in the next few days.
Update
24 Apr 13:01:45
We have just received the below back from BT on this:

Following communications from a small number of BT Wholesale FTTC comms providers regarding Openreach's implementation of Retransmission, and the identification of some of your customers who have seen increased latency on some lines for some applications since retransmission was applied, over the last 4 weeks I have been pushing Openreach to investigate, feed back and provide answers and options related to this issue. As a result, attached is a copy of a briefing from Openreach, sent to their CPs today, on how RetX works and what may have caused this increased latency.

This info is being briefed to all BT Wholesale customers via our briefing on Saturday morning 25/4/15, but as you have contacted me direct I'm sending this direct as well as providing an opportunity to participate in a trial.

Openreach have also advised me this afternoon that they intend to run a trial next week (w/c 25/4/15) on a small set of lines where devices aren't retransmission compatible in the upstream, to see if changing certain parameters removes the latency and maintains the other benefits of retransmission. The exact date lines will be trialled has yet to be confirmed.

However, they have asked if I have any end users who would like to be included in this trial. To that end, if you have particular lines you'd like to participate in this trial, please can you provide the DN for the service by 17:00 on Monday 28th April so I can get them included.

This is a trial of a solution and should improve latency performance but there is a risk that there may be changes to the headline rate.

Update
5 May 22:28:25
Update to Trial here: https://aastatus.net/2127
Started 26 Mar 09:00:00
Closed 13 May 12:16:45

23 Apr 2014 10:21:03
Details
01 Nov 2013 15:05:00
We have identified an issue that appears to be affecting some customers with FTTC modems. The issue is stupidly complex, and we are still trying to pin down the exact details. The symptoms appear to be that some packets are not passing correctly, some of the time.

Unfortunately, one of the types of packet that refuses to pass correctly is FireBrick FB105 tunnel packets. This means customers relying on FB105 tunnels over FTTC are seeing issues.

The workaround is to remove the Ethernet lead from the modem and then reconnect it. This seems to fix the issue, at least until the next PPP restart. If you have remote access to a FireBrick, e.g. via a WAN IP, and need to do this, you can change the Ethernet port settings to force it to re-negotiate, which has the same effect - this only works if the FireBrick is directly connected to the FTTC modem, as the fix does need the modem's Ethernet to restart.

We are asking BT about this, and we are currently assuming this is a firmware issue on the BT FTTC modems.

We have confirmed that modems re-flashed with non-BT firmware do not have the same problem, though we don't usually recommend doing this as it is a BT modem and part of the service.

Update
04 Nov 2013 16:52:49
We have been working on getting more specific information regarding this, we hope to post an update tomorrow.
Update
05 Nov 2013 09:34:14
We have reproduced this problem by sending UDP packets using 'Scapy'. We are doing further testing today, and hope to write up a more detailed report about what we are seeing and what we have tested.
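For illustration, the sort of Scapy probe involved looks something like the sketch below. The destination address and port range are hypothetical placeholders, not the actual values from our testing.

  # Minimal sketch (hypothetical address/ports): send UDP packets over the
  # line using many distinct port combinations, then restart PPP and send
  # the same batch again to see which combinations no longer pass.
  from scapy.all import IP, UDP, Raw, send

  TARGET = "192.0.2.1"  # hypothetical test host beyond the modem

  for port in range(20000, 20500):
      # one packet per source/destination port combination
      send(IP(dst=TARGET) / UDP(sport=port, dport=port) / Raw(b"probe"), verbose=False)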
Update
05 Nov 2013 14:27:26
We have some quite good demonstrations of the problem now, and it looks like it will mess up most VPNs based on UDP. We can show how a whole range of UDP ports can be blacklisted by the modem somehow on the next PPP restart. It is crazy. We hope to post a little video of our testing shortly.
Update
05 Nov 2013 15:08:16
Here is an update/overview of the situation. (from http://revk.www.me.uk/2013/11/bt-huawei-fttc-modem-bug-breaking-vpns.html )

We have confirmed that the latest code in the BT FTTC modems appears to have a serious bug that is affecting almost anyone running any sort of VPN over FTTC.

Existing modems seem to be upgrading, presumably due to a roll out of new code by BT. An older modem that has not been on-line for a while is fine. A re-flashed modem with non-BT firmware is fine. A working modem that had been on the line for a while suddenly stopped working, presumably having been upgraded.

The bug appears to be that the modem manages to "blacklist" some UDP packets after a PPP restart.

If we send a number of UDP packets, using various UDP ports, then cause PPP to drop and reconnect, we then find that around 254 combinations of UDP IP/ports are now blacklisted. I.e. they no longer get sent on the line. Other packets are fine.

If we send 500 different packets, around 254 of them will not work again after the PPP restart. It is not necessarily the first or last 254 packets - some in the middle are affected - but it seems to be 254 combinations. They work as much as you like before the PPP restart, and then never work after it.

We can send a batch of packets, wait 5 minutes, PPP restart, and still find that packets are now blacklisted. We have tried a wide range of ports, high and low, different src and dst ports, and so on - they are all affected.

The only way to "fix" it, is to disconnect the Ethernet port on the modem and reconnect. This does not even have to be long enough to drop PPP. Then it is fine until the next PPP restart. And yes, we have been running a load of scripts to systematically test this and reproduce the fault.

The problem is that a lot of VPNs use UDP and use the same set of ports for all of the packets, so if that combination is blacklisted by the modem the VPN stops after a PPP restart. The only way to fix it is manual intervention.

The modem is meant to be an Ethernet bridge. It should not know anything about PPP restarting or UDP packets and ports. It makes no sense that it would do this. We have tested swapping working and broken modems back and forth. We have tested with a variety of different equipment doing PPPoE and IP behind the modem.

BT are working on this, but it is a serious concern that this is being rolled out.
Update
12 Nov 2013 10:20:18
Work on this is still ongoing... We have tested this on a standard BT retail FTTC 'Infinity' line, and the problem cannot be reproduced. We suspect this is because a different IP address is allocated each time the PPP re-establishes, and whatever is doing the session tracking does not match the new connection.
Update
12 Nov 2013 11:08:17

Here is an update with a more specific explanation of the problem we are seeing:

On WBC FTTC, we can send a UDP packet inside the PPP and then drop the PPP a few seconds later. After the PPP re-establishes, UDP packets with the same source and destination IP and ports won't pass; they do not reach the LNS at the ISP.

Further to that, it's not just one src+dst IP and port tuple which is affected. We can send 254 UDP packets using different src+dst ports before we drop the PPP. After it comes back up, all 254 port combinations will fail. It is worth noting that this cannot be reproduced on an FTTC service which allocates a dynamic IP that changes each time PPP is re-established.

If we send more than 254 packets, only 254 will be broken and the others will work. It's not always the first 254 or the last 254; the broken ones move around between tests.

So it sounds like the modem (or, less likely, something in the cab or exchange) is creating state table entries for packets it is passing which tie them to a particular PPP session, and then failing to flush the table when the PPP goes down.

This is a little crazy in the first place. It's a modem. It shouldn't even be aware that it's passing PPPoE frames, let alone be looking inside them to see that they are UDP.

This only happens when using an Openreach Huawei HG612 modem that we suspect has been remotely and automatically upgraded by Openreach in the past couple of months. Further, an HG612 modem with the 'unlocked' firmware does not have this problem, and an HG612 modem that has probably not been automatically/remotely upgraded does not have this problem.

Side note: one theory is that the brokenness is actually happening in the street cab and not the modem, and that the new firmware in the modem has triggered it by enabling 'link-state forwarding' on the modem's Ethernet interface.
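For anyone trying to reproduce this kind of test, the receiving side can also be sketched in Scapy; the capture filter and port range below are hypothetical and would need to match the probe batch. Run it once before and once after the PPP restart and diff the two sets.

  # Sketch of the receiving end (hypothetical filter/ports): record which
  # src/dst port tuples actually arrive, so the before/after sets can be
  # compared across a PPP restart to identify "blacklisted" combinations.
  from scapy.all import sniff, IP, UDP

  seen = set()

  def record(pkt):
      if IP in pkt and UDP in pkt:
          seen.add((pkt[IP].src, pkt[UDP].sport, pkt[UDP].dport))

  # capture while the sender transmits its probe batch
  sniff(filter="udp and portrange 20000-20499", prn=record, timeout=60)
  print(len(seen), "port combinations seen")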

Update
27 Nov 2013 10:09:42
This post has been a little quiet, but we are still working with BT/Openreach regarding this issue. We hope to have some more information to post in the next day or two.
Update
27 Nov 2013 10:10:13
We have also had reports from someone outside of AAISP reproducing this problem.
Update
27 Nov 2013 14:19:19
We have spent the morning with some nice chaps from Openreach and Huawei. We have demonstrated the problem and they were able to do traffic captures at various points on their side. Huawei HQ can now reproduce the problem and will investigate the problem further.
Update
28 Nov 2013 10:39:36
Adrian has posted about this on his blog: http://revk.www.me.uk/2013/11/bt-huawei-working-with-us.html
Update
13 Jan 2014 14:09:08
We are still chasing this with BT.
Update
03 Apr 2014 15:47:59
We have seen this affect SIP registrations (which use 5060 as the source and target)... Customers can contact us and we'll arrange a modem swap.
Update
23 Apr 2014 10:21:03
BT are in the process of testing an updated firmware for the modems with customers. Any customers affected by this can contact us and we can arrange a new modem to be sent out.
Update
7 May 22:56:52
Just a side note on this: we're seeing the same problem on the ZyXEL VMG1312 router, which we are testing out and which uses the same chipset. Info and updates here: https://support.aa.net.uk/VMG1312-Trial
Resolution BT are testing a fix in the lab and will deploy in due course, but this could take months. However, if any customers are adversely affected by this bug, please let us know and we can arrange for BT to send a replacement ECI modem instead of the Huawei modem. Thank you all for your patience.

--Update--
BT do have a new firmware that they are rolling out to the modems. So far it does seem to have fixed the fault, and we have not heard of any other issues as of yet. If you do still have the issue, please reboot your modem; if the problem remains, please contact support@aa.net.uk and we will try to get the firmware rolled out to you.
Started 25 Oct 2013
Closed 23 Apr 2014 10:21:03

7 May 09:53:49
Details
09 Dec 2014 11:20:04
Some lines on the LOWER HOLLOWAY exchange are experiencing peak time packet loss. We have reported this to BT and they are investigating the issue.
Update
11 Dec 2014 10:46:42
BT have passed this to TSO for investigation. We are waiting for a further update.
Update
12 Dec 2014 14:23:56
BT's TSO team are currently investigating the issue.
Update
16 Dec 2014 12:07:31
Other ISPs are seeing the same problem. The BT capacity team are now looking into this.
Update
17 Dec 2014 16:21:04
No update to report yet, we're still chasing BT...
Update
18 Dec 2014 11:09:46
The latest update from this morning is: "The BT capacity team have investigated and confirmed that the port is not being over utilized, tech services have been engaged and are currently investigating from their side."
Update
19 Dec 2014 15:47:47
BT are looking to move our affected circuits on to other ports.
Update
13 Jan 10:28:52
This is being escalated further with BT now, update to follow
Update
19 Jan 12:04:34
This has been raised under a new reference, as the old one was closed. Update due by tomorrow AM.
Update
20 Jan 12:07:53
BT will be checking this further this evening so we should have more of an update by tomorrow morning
Update
22 Jan 09:44:47
An update is due by the end of the day
Update
22 Jan 16:02:24
This has been escalated further with BT, update probably tomorrow now
Update
23 Jan 09:31:23
We are still waiting for a PEW to be relayed to us. BT will be chasing this for us later in the day.
Update
26 Jan 09:46:03
BT are doing a 'test move' this evening where they will move a line onto another VLAN to see if that helps with the load. If that works, they will move the other affected lines onto this VLAN, probably Wednesday night.
Update
26 Jan 10:37:45
There will be an SVLAN migration to resolve this issue on Wednesday 28th Jan.
Update
30 Jan 09:33:57
Network rearrangement is happening on Sunday, so we will check again on Monday.
Update
2 Feb 14:23:12
Network rearrangement was done at 2AM this morning; we will check for packet loss and report back tomorrow.
Update
3 Feb 09:46:49
We are still seeing loss on a few lines - I am not at all happy that BT have not yet resolved this. A further escalation has been raised with BT and an update will follow shortly.
Update
4 Feb 10:39:03
Escalated further, with an update due at lunch time.
Update
11 Feb 14:14:58
We are getting extremely irritated with BT on this one; it should not take this long to add extra capacity in the affected area. Rocket on its way to them now...
Update
24 Feb 12:59:54
Escalated further with BT; update due by the end of the day.
Update
2 Mar 09:57:59
We only have a few customers left showing peak time packet loss, and for now the fix will be to move them onto another MSAN; I am hoping this will be done in the next few days. We really have been pushing BT hard on this and other areas where we are seeing congestion. I am pleased that there are now only a handful of affected customers left.
Update
17 Mar 11:21:33
We have just put a boot up BT on this, update to follow.
Update
2 Apr 13:16:10
BT have still not fixed the fault, so we have moved some of the affected circuits over to TalkTalk, and I am pleased to say that we are not seeing loss on those lines. This is 100% a BT issue, and I am struggling to understand why they have still not tracked the fault down.
Closed 7 May 09:53:49
Previously expected 1 Feb 09:34:04 (Last Estimated Resolution Time from AAISP)

7 May 09:52:18
Details
11 Mar 11:39:17
We are seeing some evening time congestion on all BT 21CN lines that connect through BRASs 21CN-BRAS-RED1-MR-DH up to 21CN-BRAS-RED13-MR-DH. I suspect one of the BT nodes is hitting limits some evenings, as we don't see the higher latency every night. This has been reported to BT and we will update this post as soon as they respond.
Update
11 Mar 11:44:06
Here is an example graph
Update
12 Mar 12:00:45
This has been escalated further to the BT network guys and we can expect an update within the next few hours.
Update
17 Mar 15:41:18
Work was done on this overnight, so I will check again tomorrow morning and post another update.
Update
18 Mar 11:38:25
The changes BT made overnight have made a significant difference to the latency. We are still seeing it slightly higher than we would like, so we will go back to them again.
Update
19 Mar 14:54:44
Unfortunately the latency has increased again so whatever BT did two nights ago has not really helped. We are chasing again now.
Update
23 Mar 14:07:53
BT have still not pinpointed the issue so it has been escalated further.
Update
27 Mar 13:03:38
Latency is hardly noticeable now, but we are still chasing BT to sort the actual issue; the next update will be Monday now.
Update
30 Mar 10:04:14
BT have advised that they are aware of the congestion issue at Manchester, and the solution they have in place is to install some additional edge routers. They are already escalating this to bring the date in early; currently the date is May. Obviously May is just not acceptable, and we are doing all we can to get BT to bring this date forward.
Update
2 Apr 12:28:39
We have requested a further escalation within BT, the time scales they have given for a fix is just not acceptable.
Update
13 Apr 15:12:23
The last update from BT was: 'This latency issue has been escalated to a high level. BT TSO are currently working on a resolution and are hoping to move into the testing phase soon. We will keep you updated as we get more information.' I am chasing for another update now.
Update
16 Apr 16:01:15
We are still chasing BT up on bringing the 'fix' forward. Hopefully we will have another response by the morning.
Update
21 Apr 13:25:21
The latest update from BT: We have identified a solution to the capacity issue identified and are looking to put in a solution this Friday night...
Update
24 Apr 15:25:51
BT have added more capacity on their network, and last night the latency looked fine. We will review this again on Monday.
Started 11 Mar 01:35:37
Closed 7 May 09:52:18

29 Apr 15:23:27
Details
29 Apr 14:43:36
A third of our BT lines blipped - this looks to be an issue with routing on one of our LNSs into BT.
Update
29 Apr 14:50:18
Many lines are failing to reconnect properly, we are investigating this.
Update
29 Apr 14:57:42
Lines are connecting successfully now
Update
29 Apr 15:23:27
The bulk of lines are back online. There are a small number of lines that are still failing to reconnect. These are being looked into.
Update
29 Apr 15:36:54
The remaining lines are reconnecting successfully now.
Resolution I wanted to try and explain more about what happened today, but it is kind of tricky without saying "Something crazy in the routing to/from BT".

We did, in fact, make a change - something was not working with our test LNS and a customer needed to connect. We spotted that, for some unknown reason, the routing used a static route internally instead of one announced by BGP, for just one of the four LNSs, and that on top of that the static route was wrong, hence the test LNS not working via that LNS. It made no sense, and as all three other LNSs were configured sensibly we changed the "A" LNS to be the same; after all, this was clearly a config that just worked and was no problem, or so it seemed.

Things went flappy, but we could not see why. It looked like BGP in to BT was flapping, so people connected and disconnected rather a lot. We reverted the config and things seemed to be fixed for most people, but not quite all. This made no sense: some people were connecting and going online, and then falling offline.

The "fix" to that was to change the endpoint LNS IP address used by BT to an alias on the same LNS. We have done this in the past where BT have had a faulty link in a LAG. We wonder if this issue was "lurking" and the problem we created showed it up. This shows that there was definitely an issue in BT somehow as the fix should not have made any difference otherwise.

What is extra special is that this looks like it has happened before - the logs suggest the bodge of a static route was set up in 2008, and I have this vague recollection of a mystery flappiness like this which was never solved.

Obviously I do apologise for this, and having corrected the out-of-date static route this should not need touching again, but it is damn strange.

Started 29 Apr 14:38:00
Closed 29 Apr 15:23:27
Previously expected 29 Apr 14:50:00

25 Apr 18:46:00
Details
25 Apr 18:48:19
There was an unexpected blip in routing - we are looking in to it.
Started 25 Apr 18:44:00
Closed 25 Apr 18:46:00
Previously expected 25 Apr 22:46:00

17 Apr 15:54:16
Details
15 Apr 13:16:15
Some customers on the Bradwell Abbey exchange are currently experiencing an outage. We have received reports from FTTP customers; however, this may also affect customers using other services. BT have advised that they are currently awaiting delivery of a new card at this exchange. We will chase BT for updates and provide them as we receive them.
Update
15 Apr 15:47:41
I have requested a further update from BT.
Update
16 Apr 08:07:15
Openreach AOC and PTO are investigating further at this time. We will reach out for an update later today.
Update
16 Apr 10:32:55
BT have advised that a cable down is the root cause at this time.
Update
16 Apr 15:51:50
PTO are still on site. I have asked for an ECD; however, Openreach are not supplying that information as this is fibre work.
Update
17 Apr 10:19:33
Openreach have stated they are hoping to complete the fibre work today, and resource is being tasked out. Openreach have stated this is only an estimate and not set in stone.
Update
17 Apr 14:49:46
Some customers are reporting a restored service. BT advise that teams are still on site to resolve this P1 issue.
Update
17 Apr 15:55:25
The cable down issue affecting customers using the Bradwell Abbey exchange has now been resolved.
Started 15 Apr 12:55:00 by AAISP Staff
Closed 17 Apr 15:54:16
Cause BT

16 Apr 16:00:24
Details
27 Mar 14:03:52
We are seeing packet loss on all lines connected through 21cn-BRAS-RED8-SL. The loss occurs all through the day and night, and started at 10:08 on the 25th. This has been reported to BT.
Update
27 Mar 14:07:22
Here is an example graph:
Update
30 Mar 14:37:04
BT claimed to have fixed this, but our monitoring is still seeing the loss; BT have been chased further.
Broadband Users Affected 0.01%
Closed 16 Apr 16:00:24

16 Apr 15:59:33
Details
2 Feb 10:10:46
We are seeing low level packet loss on BT lines connected to the Wapping exchange - approx 6pm to 11pm every night. Reported to BT...
Update
2 Feb 10:13:57
Here is an example graph:
Update
3 Feb 15:55:40
This has been escalated further with BT.
Update
4 Feb 10:27:37
Escalated further with BT; update due after lunch.
Update
11 Feb 14:18:00
Still not fixed; we are arming yet another rocket to fire at BT.
Update
24 Feb 12:58:51
Escalated further with BT; update due by the end of the day.
Update
2 Mar 10:00:11
Again the last few users seeing packet loss will be moved onto another MSAN in the next few days.
Update
12 Mar 12:02:57
Update expected in the next few hours.
Update
17 Mar 11:19:48
A further escalation has been raised on this, update by the end of the day
Update
30 Mar 15:35:32
This has been escalated to the next level
Broadband Users Affected 0.09%
Started 2 Feb 10:09:12 by AAISP automated checking
Closed 16 Apr 15:59:33

2 Apr 16:02:22
Details
17 Mar 12:38:27
We are seeing higher than normal evening time latency on the Wrexham exchange. It is not every night, but it does suggest BT are running another congested link. This has been reported to them and we will update this post as and when they get back to us.
Update
17 Mar 12:41:51
Here is an example graph:
Update
20 Mar 14:36:18
It has looked better the last two evenings, but the BT links were probably just less busy, so it is still being investigated.
Broadband Users Affected 0.01%
Started 15 Mar 12:36:07 by AAISP Staff
Closed 2 Apr 16:02:22

2 Apr 11:57:32
Details
1 Apr 10:00:06
Some customers connected through Gloucestershire are affected by an ongoing TalkTalk major service outage. Details below.

Summary: Network monitoring initially identified total loss of service to all customers connected to 2 exchanges in the Gloucester area. Our NOC engineers re-routed impacted traffic whilst Virgin Media engineers carried out preliminary investigations. Virgin Media restoration work subsequently caused several major circuits in the Gloucester area to fail.

This has resulted in a variety of issues for multiple customers connected to multiple exchanges. Our NOC engineers have completed re-routing procedures to restore service for some customers, with other customers continuing to experience total loss of service due to capacity limitations.

Impact: Tigworth, Witcombe and Painswick exchanges.

Hardwicke and Barnwood exchanges – experiencing congestion related issues.

Cheltenham and Churchdown exchanges – experiencing congestion related issues.

Stroud, Stonehouse, Whitecroft, Blakeney, Lydney, Bishops Cleeve, Winchcombe, Tewkesbury and Bredon exchanges – experiencing congestion related issues.

Update
1 Apr 10:31:30
TT have advised that splicing of the affected fibre is still ongoing. There are no further progress updates at this time. Further updates will be sent out shortly.
Update
2 Apr 11:57:26
Root cause analysis identified a major Virgin Media fibre break, due to third party contractor damage, as the cause of this incident. Service was fully restored when Virgin Media fibre engineers spliced new fibre. Following this we received confirmation that service had returned to BAU. TalkTalk customers would have been automatically rerouted and would have experienced only a momentary loss of service. An observation period has been carried out verifying network stability, and as no further issues have been reported this incident will be closed, with further investigation into the cause being tracked via the Problem Management process.
Closed 2 Apr 11:57:32

27 Mar 09:00:00
Details
25 Mar 21:48:13
Since the 24th March we have been seeing congestion on TalkTalk lines on the Shepherds Bush exchange. This has been reported to TalkTalk. Example graph:
Update
26 Mar 10:51:48
TalkTalk say they have fixed this. We'll be checking overnight to be sure.
Update
26 Mar 22:27:33
Lines are looking good.
Resolution

We had this feedback from TalkTalk regarding this congestion issue:

Shepherds Bush has three GigE backhauls to two different BNGs - there was a software process failure and restart on one of these devices on Tuesday morning, which had two of the three backhauls homed to it. As a result, all customers redialled to the one 'working' BNG in the exchange. Normally when this happens we will calculate whether or not the backhaul can handle that number of customers and, if not, manually intervene; in this case, however, a secondary knock-on issue meant that our DHCP based customers (FTTC subs) were sent through the same backhaul and the calculation was inaccurate.

If the PPP session was restarted they would have reconnected on their normal BNG and everything should be OK - we've just made this change, manually moving subscribers over. We still have a couple of lines on the backup BNG, so we will monitor for any issues and take any necessary actions to resolve them.

Started 24 Mar 17:00:00
Closed 27 Mar 09:00:00

17 Mar 11:18:55
Details
20 Jan 12:53:37
We are seeing low level packet loss on some BT circuits connected to the EUSTON exchange. This has been raised with BT, and as soon as we have an update we will post it here.
Update
20 Jan 12:57:32
Here is an example graph:
Update
22 Jan 09:02:48
We are due an update on this one later this PM
Update
23 Jan 09:36:21
BT are chasing this and we are due an update at around 1:30PM.
Update
26 Jan 09:41:39
Work was done overnight on the BT side to move load onto other parts of the network; we will check this again this evening and report back.
Update
27 Jan 10:33:05
We are still seeing lines with evening packet loss but BT don't appear to understand this and after spending the morning arguing with them they have agreed to investigate further. Update to follow.
Update
28 Jan 09:35:28
Update from BT due this PM
Update
29 Jan 10:33:57
BT are again working on this, but no further update will be given until tomorrow morning.
Update
3 Feb 16:19:06
This one has also been escalated further with BT
Update
4 Feb 10:18:11
BT have identified a fault within their network and we have been advised that an update will be given after lunch today
Update
11 Feb 14:16:56
Yet another rocket on its way to BT.
Update
24 Feb 12:59:20
escalated further with BT, update due by the end of the day.
Update
2 Mar 09:59:19
Still waiting for BT to raise an emergency PEW (planned engineering work); the PEW will sort the last few lines where we are seeing peak time packet loss.
Update
12 Mar 12:03:57
I need to check this tonight, as BT think it is fixed. I will post an update tomorrow.
Broadband Users Affected 0.07%
Started 10 Jan 12:51:26 by AAISP automated checking
Closed 17 Mar 11:18:55
Previously expected 21 Jan 16:51:26

6 Mar 13:00:00
Details
5 Mar 10:54:01
We are seeing quite a few lines on the Durham and NEW BRANCEPETH exchange with a connection problem. Customers may have no internet access, and their router may be constantly logging in and out.

We have reported this fault to BT and they are investigating. It looks like the BRAS has a fault.

Update
5 Mar 16:25:28
We have been chasing this with BT throughout the day; their tech team are still investigating.
Update
6 Mar 09:16:19
We still have a few lines off and we are on the phone to BT now chasing this. Update to follow.
Update
6 Mar 13:48:38
BT appear to have a broken LAG (link aggregation group), and as a workaround we have had to change one of our end point IP addresses; the affected customers are back online. This is just a workaround, and hopefully BT will shortly fix their end.
Started 5 Mar 02:00:00
Closed 6 Mar 13:00:00

4 Mar 10:03:07
Details
2 Mar 06:17:18
Many of our customer broadband lines suffered a blip just after 2am. We're still investigating that, but it seems our RADIUS accounting got behind due to the high number of lines flapping. Accounting has caught up now, but it does mean that we sent out some delayed notifications overnight. This could result in, for example, line down/up notification emails delayed by several hours. The time stamp in the notification should show if this is the case.
Closed 4 Mar 10:03:07

24 Feb 12:57:31
Details
11 Feb 10:17:36
We are seeing evening congestion on the Wrexham exchange; two other BRASs are also affected: 21CN-BRAS-RED6-SF and 21CN-BRAS-RED7-SF. Customers can check which BRAS/exchange they are connected to from our control pages.
Update
11 Feb 10:27:08
Here is an example graph:
Update
13 Feb 11:39:14
We are chasing BT for an update, and as soon as we have further news we will update this post.
Update
16 Feb 10:20:43
It looks like the peak time latency just went away Thursday evening with no report from BT that they actually changed something. We will continue monitoring for the next few days to ensure it really has gone away.
Broadband Users Affected 0.05%
Started 11 Feb 10:12:08 by AA Staff
Closed 24 Feb 12:57:31

5 Feb 13:07:52
Details
8 Jan 15:44:04
We are seeing some levels of congestion in the evening on the following exchanges: BT COWBRIDGE, BT MORRISTON, BT WEST (Bristol area), BT CARDIFF EMPIRE, BT THORNBURY, BT EASTON, BT WINTERBOURNE, BT FISHPONDS, BT LLANTWIT MAJOR. These have been reported to BT and they are currently investigating.
Update
8 Jan 15:56:59
Here is an example graph:
Update
9 Jan 15:21:53
BT have been chased further on this as they have not provided an update as promised.
Update
9 Jan 16:19:48
We did not see any congestion over night on the affected circuits but we will continue monitoring all affected lines and post another update on Monday.
Update
12 Jan 10:37:32
We are still seeing congestion on the exchanges listed above between 20:00 and 22:30. We have updated BT and are awaiting their reply.
Update
20 Jan 12:52:05
We are now seeing congestion starting from 19:30 to 22:30 on these exchanges. We are awaiting an update from BT.
Update
21 Jan 11:13:44
BT have passed this to the TSO team; we are awaiting their investigation results. We will provide another update as soon as we have a reply.
Update
22 Jan 09:06:14
An update is expected on this tomorrow
Update
23 Jan 09:33:48
This one is still being investigated at the moment, and may need a card or fibre cable fitting. We will chase this for an update later in the day.
Broadband Users Affected 0.30%
Started 8 Jan 15:40:15 by AA Staff
Closed 5 Feb 13:07:52

29 Jan 10:07:29
Details
27 Jan 11:48:41
We are currently seeing congestion in the evening, between 8PM and 11PM, on the following BRASs: 21CN-BRAS-RED4-CF-C, 21CN-ACC-ALN12-CF-C and 21CN-BRAS-RED8-CF-C. We have raised this with BT, and their estimated completion date is 29-01-2015 11:23. We will update you as soon as we have some more information.
Update
27 Jan 12:02:35
Here is an example graph:
Resolution Nothing back from BT, but we suspect they have increased capacity across the links. Any further news on this and we will update the post.
Started 27 Jan 11:44:56
Closed 29 Jan 10:07:29

24 Jan 08:17:21
Details
23 Jan 08:35:26
In addition to all of the BT issues we have ongoing (and affecting all ISPs), we have seen some signs of congestion in the evening last night - this is due to planned switch upgrade work this morning. Normally we aim not to be the bottleneck, as you know, but we have moved customers on to half of our infrastructure to facilitate the switch change, and this puts us right on the limit for capacity at peak times. Over the next few nights we will be redistributing customers back on to the normal arrangement of three LNSs with one hot spare, and this will address the issue. Hopefully we have enough capacity freed up to avoid the issue tonight. Sorry for any inconvenience. Longer term we have more LNSs planned as we expand anyway.
Update
24 Jan 07:30:14
The congestion was worse last night, and the first stage of moving customers back to correct LNSs was done over night. We are completing this now (Saturday morning) to ensure no problems this evening.
Resolution Lines all moved to correct LNS so there should be no issues tonight.
Started 22 Jan
Closed 24 Jan 08:17:21
Previously expected 24 Jan 08:30:00

28 Jan 09:38:34
Details
4 Jan 09:45:22
We are seeing evening congestion on the Bristol North exchange. An incident has been raised with BT and they are investigating.
Update
19 Jan 09:51:48
Here is an example graph:
Update
22 Jan 08:58:26
The fault has been escalated further and we are expecting an update on this tomorrow.
Update
23 Jan 09:37:14
No IRAMS/PEW has been issued yet, and there are no further updates this morning. We are chasing BT. An update is expected around 1:30PM today.
Update
26 Jan 09:36:18
BT are due to update us on this after 3pm today.
Update
26 Jan 13:24:05
BT are looking to change the SFP port on the BRAS, we are chasing time scales on this now.
Update
26 Jan 14:16:43
This work will take place between 02:00 and 06:00 tomorrow morning
Update
27 Jan 09:23:21
Chasing BT to confirm the work was done overnight; update to follow.
Update
27 Jan 11:25:18
Nope, the work was postponed to this evening, so we won't know whether they have fixed it until Wednesday evening. We will see...
Update
28 Jan 09:38:34
Wow. Another BT congested link has been fixed overnight.
Resolution BT changed the SFP port on the BRAS
Broadband Users Affected 0.01%
Started 4 Jan 09:45:22
Closed 28 Jan 09:38:34
Previously expected 29 Jan 13:23:24

28 Jan 09:25:23
Details
21 Jan 09:44:42
Our monitoring has picked up further congestion within the BT network, causing high latency between 6pm and 11pm every night on the following BRASs: 21CN-BRAS-RED3-CF-C and 21CN-BRAS-RED6-CF-C. This is affecting BT lines only, in the Bristol and south/south-west Wales areas. An incident has been raised with BT and we will update this post as and when we have updates.
Update
21 Jan 09:47:51
Here is an example graph:
Update
22 Jan 08:46:12
We are expecting a resolution tomorrow, 2015-01-23.
Update
23 Jan 09:35:26
This one is still with the Adhara NOC team, who are trying to solve the congestion problems. Target resolution is today, 23/1/15; we have no specific time frame, so we will update you as soon as we have more information from BT.
Update
26 Jan 10:03:08
We are expecting an update on this later this afternoon.
Update
26 Jan 16:23:32
BT are seeing some errors on slot 7 on one of the 7750s. They are looking to swap it over this evening and will then monitor it. We will update you once we get any further news.
Update
27 Jan 09:20:48
We are checking with BT whether or not a change was made over night.
Update
28 Jan 09:25:17
BT have actually cleared the congestion. We will monitor this very closely though.
Broadband Users Affected 0.03%
Started 4 Jan 18:00:00 by AA Staff
Closed 28 Jan 09:25:23
Previously expected 28 Jan 09:20:53

27 Jan 15:45:12
Details
27 Jan 13:45:04
There appears to be a problem with one of BT's BRASs (21CN-BRAS-RED3-BM-TH) where customers are unable to connect. We are speaking to BT about this now and will update this post ASAP.
Update
27 Jan 13:52:57
BT 'tech services' are aware and are dealing with it as we speak...
Update
27 Jan 13:54:12
There are engineers on site already!
Update
27 Jan 15:45:36
WOW. BT have fixed their BRAS fault in record time.
Broadband Users Affected 0.01%
Started 27 Jan 12:19:21
Closed 27 Jan 15:45:12
Previously expected 27 Jan 17:42:21

22 Jan 09:48:14
Details
13 Jan 12:17:05
We are seeing low level packet loss on the Hunslet exchange (BT tails); this has been reported to BT. All of our BT tails connected to the Hunslet exchange are affected.
Update
13 Jan 12:27:11
Here is an example graph:
Update
15 Jan 11:50:15
Having chased BT up they have promised us an update by the end of play today.
Update
16 Jan 09:07:51
BT have identified a card fault within their network. We are just waiting for confirmation as to when it will be fixed.
Update
19 Jan 09:31:11
It appears this is now resolved - well BT have added extra capacity on the link: "To alleviate congestion on acc-aln2.ls-bas -10/1/1 the OSPF cost on the backhauls in area 8.7.92.17 to acc-aln1.bok and acc-aln1.hma have been temporarily adjusted to 4000 from 3000. This has brought traffic down by about 10 to 15 % - and should hopefully avoid the over utilisation during peak"
Resolution Work has been completed on the BT network to alleviate traffic
Broadband Users Affected 0.01%
Started 11 Jan 12:14:28 by AAISP Pro Active Monitoring Systems
Closed 22 Jan 09:48:14

10 Jan 20:00:00
Details
10 Jan 19:44:03
Since 19:20 we have seen issues on all TalkTalk backhaul lines. We are investigating.
Update
10 Jan 20:08:08
Looks to be recovering
Update
10 Jan 21:32:01
Most lines are up as of 8pm. We'll investigate the cause of this.
Started 10 Jan 19:20:00
Closed 10 Jan 20:00:00

12 Dec 2014 11:00:40
Details
11 Dec 2014 10:42:15
We are seeing some TT connected lines with packet loss starting at 9AM yesterday and today. The loss lasts until 10AM, and then a low level of loss continues. We have reported this to TalkTalk.
Update
11 Dec 2014 10:46:34
This is the pattern of loss we are seeing:
Update
12 Dec 2014 12:00:04
No loss has been seen on these lines today. We're still chasing TT for any update though.
Resolution The problem went away... TT were unable to find the cause.
Broadband Users Affected 7%
Started 11 Dec 2014 09:00:00
Closed 12 Dec 2014 11:00:40

11 Dec 2014 14:15:00
Details
11 Dec 2014 14:13:58
BT issue affecting SOHO AKA GERRARD STREET (21CN-ACC-ALN1-L-GER). We have reported this to BT and they are now investigating.
Update
11 Dec 2014 14:19:33
BT are investigating, however the circuits are mostly back online.
Started 11 Dec 2014 13:42:11 by AAISP Pro Active Monitoring Systems
Closed 11 Dec 2014 14:15:00
Previously expected 11 Dec 2014 18:13:11 (Last Estimated Resolution Time from AAISP)

02 Dec 2014 09:05:00
Details
01 Dec 2014 21:54:24
All FTTP circuits on Bradwell Abbey have packet loss. This started at about 23:45 on 30th November. This is affecting other ISPs too. BT did have an Incident open, but this has been closed. They restarted a line card last night, but it seems the problem has been present since the card was restarted. We are chasing BT.
Example graph:
Update
01 Dec 2014 22:38:39
It has been a struggle to get the front line support and the Incident Desk at BT to accept that this is a problem. We have passed this on to our Account Manager and other contacts within BT in the hope of a speedy fix.
Update
02 Dec 2014 07:28:40
BT have tried doing something overnight, but the packet loss still exists as of 7am on 2nd December. Our monitoring shows:
  • Packet loss stops at 00:30
  • The lines go off between 04:20 and 06:00
  • The packet loss starts again at 06:00 when the lines come back online
We've passed this on to BT.
Update
02 Dec 2014 09:04:56
Since 7AM today, the lines have been OK... we will continue to monitor.
Started 30 Nov 2014 23:45:00
Closed 02 Dec 2014 09:05:00

03 Dec 2014 09:44:00
Details
27 Nov 2014 16:31:03
We are seeing what looks like congestion on the Walworth exchange. Customers will be experiencing high latency, packetloss and slow throughput in the evenings and weekends. We have reported this to TalkTalk.
Update
02 Dec 2014 09:39:27
TalkTalk are still investigating this issue.
Update
02 Dec 2014 12:22:04
The congestion has been located on the Walworth exchange, and TalkTalk are in the process of traffic balancing.
Update
03 Dec 2014 10:30:14
Capacity has been increased and the exchange is looking much better now.
Started 27 Nov 2014 16:28:35
Closed 03 Dec 2014 09:44:00

19 Nov 2014 16:20:46
Details
19 Nov 2014 15:11:12
Lonap (one of the main Internet peering points in the UK) has a problem. We have stopped passing traffic over Lonap. Customers may have seen packetloss for a short while, but routing should be OK now. We are monitoring the traffic and will bring back Lonap when all is well.
Update
19 Nov 2014 16:21:29
The Lonap problem has been fixed, and we've re-enabled our peering.
Started 19 Nov 2014 15:00:00
Closed 19 Nov 2014 16:20:46

21 Nov 2014 00:18:00
Details
21 Nov 2014 10:58:09
We have a number of TT lines down all on the same RAS: HOST-62-24-203-36-AS13285-NET. We are chasing this with TalkTalk.
Update
21 Nov 2014 11:01:29
Most lines are now back. We have informed TalkTalk.
Update
21 Nov 2014 12:18:22
TT have come back to us. They were aware of the problem; it was caused by a software problem on an LTS.
Started 21 Nov 2014 10:45:00
Closed 21 Nov 2014 00:18:00

25 Nov 2014 10:43:46
Details
21 Oct 2014 14:10:19
We're seeing congestion from 10am up to 11:30pm across the BT Rose Street, PIMLICO and High Wycombe exchanges. A fault has been raised with BT and we will post updates as soon as we can. Thanks for your patience.
Update
28 Oct 2014 11:23:44
Rose Street and High Wycombe are now clear. Still investigating Pimlico
Update
03 Nov 2014 14:41:45
Pimlico has now been passed to BT's capacity team to deal with. Further capacity is needed and will be added ASAP. We will provide updates as soon as they're available.
Update
05 Nov 2014 10:12:30
We have just been informed by the BT capacity team that end users will be moved to a different VLAN on Friday morning. We will post further updates when we have them.
Update
11 Nov 2014 10:23:59
Most of the Pimlico exchange is now fixed. Sorry for the delay.
Update
19 Nov 2014 11:01:57
There is further planned work on the Pimlico exchange for the 20th November. This should resolve the congestion on the Exchange.
Update
25 Nov 2014 10:44:43
Pimlico lines are now running as expected. Thanks for your patience.
Started 21 Oct 2014 13:31:50
Closed 25 Nov 2014 10:43:46

04 Nov 2014 16:47:11
Details
04 Nov 2014 09:42:18
Several graphs have been missing in recent weeks, on some days and on some LNSs. This is something we are working on. Unfortunately, today one of the LNSs is not showing live graphs again, and so these will not be logged overnight. We hope to have a fix for this in the next few days. Sorry for any inconvenience.
Resolution The underlying cause has been identified and a fix will be deployed over the next few days.
Started 01 Oct 2014
Closed 04 Nov 2014 16:47:11
Previously expected 10 Nov 2014

01 Nov 2014 11:35:11
[Broadband] - Blip - Closed
Details
01 Nov 2014 11:55:38
There appears to have been a small DoS attack which resulted in a blip around 11:29:16 today, and caused some issues with broadband lines and other services. We're looking into this at present, and graphs are not currently visible to customers on one of the LNSs.
Update
01 Nov 2014 13:09:44
We expect graphs on a.gormless to be back tomorrow morning after some planned work.
Resolution Being investigated further.
Started 01 Nov 2014 11:29:16
Closed 01 Nov 2014 11:35:11

29 Sep 2014 22:37:36
Details
21 Aug 2014 12:50:32
Over the past week or so we have been missing data on some monitoring graphs; this is shown as purple for the first hour in the morning. It is being caused by delays in collecting the data, and is being looked into.
Resolution We believe this has been fixed now. We have been monitoring it for a fortnight after making an initial fix, and it looks to have been successful.
Closed 29 Sep 2014 22:37:36

20 Sep 2014 07:09:09
Details
20 Sep 2014 11:59:13
RADIUS accounting is behind at the moment. This is causing usage data to appear as missing from customer lines. The accounting is not broken and is catching up; the usage data doesn't appear to be lost, and should appear later in the day.
Update
21 Sep 2014 08:12:52
Records have now caught up.
Closed 20 Sep 2014 07:09:09
Previously expected 20 Sep 2014 15:57:11

26 Aug 2014 09:15:00
Details
26 Aug 2014 09:02:02
Yesterday's and today's line graphs are not being shown at the moment. We are working on restoring this.
Update
26 Aug 2014 09:42:18
Today's graphs are back; yesterday's are lost, though.
Started 26 Aug 2014 08:00:00
Closed 26 Aug 2014 09:15:00

01 Sep 2014 19:42:08
Details
01 Sep 2014 19:42:56
c.gormless rebooted; lines moved to other LNSs automatically. We are investigating.
Broadband Users Affected 33%
Started 01 Sep 2014 19:39:19
Closed 01 Sep 2014 19:42:08

13 Aug 2014 09:15:00
Details
13 Aug 2014 11:26:08
Due to a RADIUS issue we were not receiving line statistics from just after midnight. As a result we needed to force lines to log in again. This would have caused lines to lose their PPP connection and then reconnect at around 9AM. We apologise for this, and will be investigating the cause.
Started 13 Aug 2014 09:00:00
Closed 13 Aug 2014 09:15:00

08 Aug 2014 15:25:00
Details
08 Aug 2014 15:42:28
At 15:15 we saw customers on the 'D' LNS lose their connection and reconnect a few moments later. The cause of this is being looked into.
Resolution Lines quickly came back online; we apologise for the drop though. The cause will be investigated.
Started 08 Aug 2014 15:15:00
Closed 08 Aug 2014 15:25:00

01 Aug 2014 10:00:00
Details
27 Jul 2014 21:00:00
We saw what looks to be congestion on some lines on the Rugby exchange (BT lines). This showed as slight packet loss on Sunday evening. We'll report this to BT.
Update
30 Jul 2014 11:03:08
A card was replaced in the early hours of this morning, which should have fixed the congestion problems.
Started 27 Jul 2014 21:00:00
Closed 01 Aug 2014 10:00:00

28 Jul 2014 11:00:00
Details
28 Jul 2014 09:20:03
Customers may have seen a drop and reconnect of their broadband lines this morning. Due to a problem with our RADIUS accounting on Sunday we have needed to restart our customer database server, Clueless. This has been done, and Clueless is back online. Due to the initial problem with RADIUS accounting most DSL lines have had to be restarted.
Update
28 Jul 2014 10:02:13
We are also sending out order update messages in error - e.g. emails about orders that have already completed. We apologise for this confusion and are investigating.
Started 28 Jul 2014 09:00:00
Closed 28 Jul 2014 11:00:00

17 Jul 2014 17:45:00
Details
17 Jul 2014 16:23:15
We have a few reports from customers, and a vague Incident report from BT, suggesting there may be a PPP problem within the BT network which is affecting customers logging in to us. Customers may see their ADSL router in sync, but not able to log in (no PPP).
Update
17 Jul 2014 16:40:31
This looks to be affecting BT ADSL and FTTC circuits. A line which tries to log in may well fail.
Update
17 Jul 2014 16:42:34
Some lines are logging in successfully now.
Update
17 Jul 2014 16:54:15
Not all lines are back yet, but lines are still logging back in, so if you are still offline it may take a little more time.
Resolution This was a BT incident, reference IMT26151/14. This was closed by BT at 17:45 without giving us further details about what the problem was or what they did to restore service.
Started 17 Jul 2014 16:00:00
Closed 17 Jul 2014 17:45:00

11 Jul 2014 11:03:55
Details
11 Jul 2014 17:00:48
The "B" LNS restarted today, unexpectedly. All lines reconnected within minutes (however fast the model retries). We'll clear some traffic off the "D" server back to the "B" server later this evening.
Resolution We're investigating the cause of this.
Broadband Users Affected 33%
Started 11 Jul 2014 11:03:52
Closed 11 Jul 2014 11:03:55

01 Jul 2014 23:25:00
Details
01 Jul 2014 20:50:32
We have identified some TalkTalk back haul lines with congestion starting around 16:20, now showing 100ms latency with 2% loss. This affects around 3% of our TT lines.

We have techies in TalkTalk on the case and hope to have it resolved soon.

Update
01 Jul 2014 20:56:19
"On call engineers are being scrambled now - we have an issue in the wider Oxford area and you should see an incident coming through shortly."
Resolution Engineers fixed the issue last night.
Started 01 Jul 2014 16:20:00
Closed 01 Jul 2014 23:25:00
Previously expected 02 Jul 2014