
23 Jan 09:37:14
Details
4 Jan 09:45:22
We are seeing evening congestion on the Bristol North exchange; an incident has been raised with BT and they are investigating.
Update
19 Jan 09:51:48
Here is an example graph:
Update
22 Jan 08:58:26
The fault has been escalated further and we expect an update on this tomorrow.
Update
23 Jan 09:37:14
No IRAMS/PEW has been issued yet, and there have been no further updates this morning. We are chasing BT. An update is expected around 1:30PM today.
Broadband Users Affected 0.01%
Started 4 Jan 09:45:22
Update was expected 23 Jan 13:30:00
Previously expected 21 Jan 13:45:22

23 Jan 09:36:21
Details
20 Jan 12:53:37
We are seeing low-level packet loss on some BT circuits connected to the EUSTON exchange. This has been raised with BT, and we will post an update here as soon as we have one.
Update
20 Jan 12:57:32
Here is an example graph:
Update
22 Jan 09:02:48
We are due an update on this one later this afternoon.
Update
23 Jan 09:36:21
BT are chasing this and we are due an update at around 1:30PM.
Broadband Users Affected 0.07%
Started 10 Jan 12:51:26 by AAISP automated checking
Update was expected 23 Jan 13:30:00
Previously expected 21 Jan 16:51:26

23 Jan 09:35:26
Details
21 Jan 09:44:42
Our monitoring has picked up further congestion within the BT network, causing high latency between 6pm and 11pm every night on the following BRASs: 21CN-BRAS-RED3-CF-C and 21CN-BRAS-RED6-CF-C. This is affecting BT lines only, in the Bristol and South/South West Wales areas. An incident has been raised with BT and we will update this post as and when we have more information.
Update
21 Jan 09:47:51
Here is an example graph:
Update
22 Jan 08:46:12
We are expecting a resolution on this tomorrow, 2015-01-23.
Update
23 Jan 09:35:26
This one is still with the Adhara NOC team, who are trying to solve the congestion problems. The target resolution is today, 23/1/15; we have no specific time frame, so we will update you as soon as we have more information from BT.
Broadband Users Affected 0.03%
Started 4 Jan 18:00:00 by AA Staff
Update expected Today 10:30:00
Previously expected 23 Jan 13:40:45

23 Jan 09:33:48
Details
8 Jan 15:44:04
We are seeing some levels of congestion in the evening on the following exchanges: BT COWBRIDGE, BT MORRISTON, BT WEST (Bristol area), BT CARDIFF EMPIRE, BT THORNBURY, BT EASTON, BT WINTERBOURNE, BT FISHPONDS, BT LLANTWIT MAJOR. These have been reported to BT and they are currently investigating.
Update
8 Jan 15:56:59
Here is an example graph:
Update
9 Jan 15:21:53
BT have been chased further on this as they have not provided an update as promised.
Update
9 Jan 16:19:48
We did not see any congestion over night on the affected circuits but we will continue monitoring all affected lines and post another update on Monday.
Update
12 Jan 10:37:32
We are still seeing congestion on the exchanges listed above between 20:00 and 22:30. We have updated BT and are awaiting their reply.
Update
20 Jan 12:52:05
We are now seeing congestion starting from 19:30 to 22:30 on these exchanges. We are awaiting an update from BT.
Update
21 Jan 11:13:44
BT have passed this to their TSO team, and we await their investigation results. We will provide another update as soon as we have a reply.
Update
22 Jan 09:06:14
An update is expected on this tomorrow
Update
23 Jan 09:33:48
This one is still being investigated at the moment, and may need a card or fibre cable fitting. We will chase this for an update later in the day.
Broadband Users Affected 0.30%
Started 8 Jan 15:40:15 by AA Staff
Update was expected 23 Jan 13:30:00

23 Jan 09:31:23
Details
09 Dec 2014 11:20:04
Some lines on the LOWER HOLLOWAY exchange are experiencing peak time packet loss. We have reported this to BT and they are investigating the issue.
Update
11 Dec 2014 10:46:42
BT have passed this to TSO for investigation. We are waiting for a further update.
Update
12 Dec 2014 14:23:56
BT's TSO are currently investigating the issue.
Update
16 Dec 2014 12:07:31
Other ISPs are seeing the same problem. The BT Capacity team are now looking in to this.
Update
17 Dec 2014 16:21:04
No update to report yet, we're still chasing BT...
Update
18 Dec 2014 11:09:46
The latest update from this morning is: "The BT capacity team have investigated and confirmed that the port is not being over utilized, tech services have been engaged and are currently investigating from their side."
Update
19 Dec 2014 15:47:47
BT are looking to move our affected circuits on to other ports.
Update
13 Jan 10:28:52
This is being escalated further with BT now; an update is to follow.
Update
19 Jan 12:04:34
This has been raised under a new reference, as the old one was closed. An update is due by tomorrow morning.
Update
20 Jan 12:07:53
BT will be checking this further this evening so we should have more of an update by tomorrow morning
Update
22 Jan 09:44:47
An update is due by the end of the day
Update
22 Jan 16:02:24
This has been escalated further with BT; an update will probably come tomorrow now.
Update
23 Jan 09:31:23
We are still waiting for a PEW to be relayed to us. BT will be chasing this for us later in the day.
Update was expected 23 Jan 15:30:00
Previously expected 19 Dec 2014 15:14:17 (Last Estimated Resolution Time from AAISP)

Saturday 08:17:21
Details
23 Jan 08:35:26
In addition to all of the BT issues we have ongoing (and affecting all ISPs), we have seen some signs of congestion in the evening last night - this is due to planned switch upgrade work this morning. Normally we aim not to be the bottleneck, as you know, but we have moved customers on to half of our infrastructure to facilitate the switch change, and this puts us right on the limit for capacity at peak times. Over the next few nights we will be redistributing customers back on to the normal arrangement of three LNSs with one hot spare, and this will address the issue. Hopefully we have enough capacity freed up to avoid the issue tonight. Sorry for any inconvenience. Longer term we have more LNSs planned as we expand anyway.
Update
Saturday 07:30:14
The congestion was worse last night, and the first stage of moving customers back to correct LNSs was done over night. We are completing this now (Saturday morning) to ensure no problems this evening.
Resolution Lines all moved to correct LNS so there should be no issues tonight.
Started 22 Jan
Closed Saturday 08:17:21
Previously expected Saturday 08:30:00

22 Jan 09:48:14
Details
13 Jan 12:17:05
We are seeing low-level packet loss on the Hunslet exchange (BT tails); this has been reported to BT. All of our BT tails connected to the Hunslet exchange are affected.
Update
13 Jan 12:27:11
Here is an example graph:
Update
15 Jan 11:50:15
Having chased BT up they have promised us an update by the end of play today.
Update
16 Jan 09:07:51
BT have identified a card fault within their network. We are just waiting for confirmation as to when it will be fixed.
Update
19 Jan 09:31:11
It appears this is now resolved - rather than adding capacity, BT have rerouted traffic away from the congested link: "To alleviate congestion on acc-aln2.ls-bas -10/1/1 the OSPF cost on the backhauls in area 8.7.92.17 to acc-aln1.bok and acc-aln1.hma have been temporarily adjusted to 4000 from 3000. This has brought traffic down by about 10 to 15 % - and should hopefully avoid the over utilisation during peak"
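As a side note on the mechanism BT describe: OSPF always prefers the lowest-total-cost path, so raising one link's cost can make an alternative route cheaper and divert traffic away from it. Here is a minimal sketch of that effect; the topology, node names and the alternative route's costs are illustrative assumptions, not BT's actual network:

```python
import heapq

def shortest_path_cost(graph, src, dst):
    # Plain Dijkstra over {node: {neighbour: cost}}; returns the lowest total
    # cost from src to dst, which is how OSPF chooses between routes.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None

# Two routes from a BRAS to the core: a direct (congested) backhaul at cost
# 3000, and an alternative via another node at 2500 + 1200 = 3700.
net = {"bras": {"core": 3000, "alt": 2500}, "alt": {"core": 1200}, "core": {}}
print(shortest_path_cost(net, "bras", "core"))  # 3000 - the direct link wins

# Raise the congested backhaul's cost from 3000 to 4000, as in the quoted change:
net["bras"]["core"] = 4000
print(shortest_path_cost(net, "bras", "core"))  # 3700 - traffic shifts via "alt"
```

Only traffic whose alternative path now costs less moves over, which matches the partial (10 to 15%) reduction BT reported.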
Resolution Work has been completed on the BT network to alleviate traffic
Broadband Users Affected 0.01%
Started 11 Jan 12:14:28 by AAISP Pro Active Monitoring Systems
Closed 22 Jan 09:48:14

10 Jan 20:00:00
Details
10 Jan 19:44:03
Since 19:20 we have seen issues on all TalkTalk backhaul lines. Investigating
Update
10 Jan 20:08:08
Looks to be recovering
Update
10 Jan 21:32:01
Most lines are up as of 8pm. We'll investigate the cause of this.
Started 10 Jan 19:20:00
Closed 10 Jan 20:00:00

12 Dec 2014 11:00:40
Details
11 Dec 2014 10:42:15
We are seeing some TalkTalk-connected lines with packet loss starting at 9AM, both yesterday and today. The loss lasts until 10AM, and a low level of loss continues after that. We have reported this to TalkTalk.
Update
11 Dec 2014 10:46:34
This is the pattern of loss we are seeing:
Update
12 Dec 2014 12:00:04
No loss has been seen on these lines today. We're still chasing TT for any update though.
Resolution The problem went away... TT were unable to find the cause.
Broadband Users Affected 7%
Started 11 Dec 2014 09:00:00
Closed 12 Dec 2014 11:00:40

11 Dec 2014 14:15:00
Details
11 Dec 2014 14:13:58
BT issue affecting SOHO (AKA GERRARD STREET) 21CN-ACC-ALN1-L-GER. We have reported this to BT and they are now investigating.
Update
11 Dec 2014 14:19:33
BT are investigating, however the circuits are mostly back online.
Started 11 Dec 2014 13:42:11 by AAISP Pro Active Monitoring Systems
Closed 11 Dec 2014 14:15:00
Previously expected 11 Dec 2014 18:13:11 (Last Estimated Resolution Time from AAISP)

02 Dec 2014 09:05:00
Details
01 Dec 2014 21:54:24
All FTTP circuits on Bradwell Abbey have packet loss. This started at about 23:45 on 30th November. This is affecting other ISPs too. BT did have an incident open, but this has been closed. They restarted a line card last night, but it seems the problem has persisted since the card was restarted. We are chasing BT.
Example graph:
Update
01 Dec 2014 22:38:39
It has been a struggle to get the front line support and the Incident Desk at BT to accept that this is a problem. We have passed this on to our Account Manager and other contacts within BT in the hope of a speedy fix.
Update
02 Dec 2014 07:28:40
BT have tried doing something overnight, but the packet loss still exists at 7am on 2nd December. Our monitoring shows:
  • Packet loss stops at 00:30
  • The lines go off between 04:20 and 06:00
  • The packet loss starts again at 06:00 when the lines come back online
We've passed this on to BT.
Update
02 Dec 2014 09:04:56
Since 7AM today, the lines have been OK... we will continue to monitor.
Started 30 Nov 2014 23:45:00
Closed 02 Dec 2014 09:05:00

03 Dec 2014 09:44:00
Details
27 Nov 2014 16:31:03
We are seeing what looks like congestion on the Walworth exchange. Customers will be experiencing high latency, packetloss and slow throughput in the evenings and weekends. We have reported this to TalkTalk.
Update
02 Dec 2014 09:39:27
TalkTalk are still investigating this issue.
Update
02 Dec 2014 12:22:04
The congestion issue has been identified on the Walworth Exchange and TalkTalk are in the process of traffic balancing.
Update
03 Dec 2014 10:30:14
Capacity has been increased and the exchange is looking much better now.
Started 27 Nov 2014 16:28:35
Closed 03 Dec 2014 09:44:00

19 Nov 2014 16:20:46
Details
19 Nov 2014 15:11:12
Lonap (one of the main Internet peering points in the UK) has a problem. We have stopped passing traffic over Lonap. Customers may have seen packetloss for a short while, but routing should be OK now. We are monitoring the traffic and will bring back Lonap when all is well.
Update
19 Nov 2014 16:21:29
The Lonap problem has been fixed, and we've re-enabled our peering.
Started 19 Nov 2014 15:00:00
Closed 19 Nov 2014 16:20:46

21 Nov 2014 00:18:00
Details
21 Nov 2014 10:58:09
We have a number of TT lines down all on the same RAS: HOST-62-24-203-36-AS13285-NET. We are chasing this with TalkTalk.
Update
21 Nov 2014 11:01:29
Most lines are now back. We have informed TalkTalk.
Update
21 Nov 2014 12:18:22
TT have come back to us. They were aware of the problem; it was caused by a software problem on an LTS.
Started 21 Nov 2014 10:45:00
Closed 21 Nov 2014 00:18:00

25 Nov 2014 10:43:46
Details
21 Oct 2014 14:10:19
We're seeing congestion from 10am up to 11:30pm across the BT Rose Street, Pimlico and High Wycombe exchanges. A fault has been raised with BT and we will post updates as soon as we can. Thanks for your patience.
Update
28 Oct 2014 11:23:44
Rose Street and High Wycombe are now clear. Still investigating Pimlico
Update
03 Nov 2014 14:41:45
Pimlico has now been passed to BT's capacity team to deal with. Further capacity is needed and will be added ASAP. We will provide updates as soon as they are available.
Update
05 Nov 2014 10:12:30
We have just been informed by the BT capacity team that end users will be moved to a different VLAN on Friday morning. We will post further updates when we have them.
Update
11 Nov 2014 10:23:59
Most of the Pimlico exchange is now fixed. Sorry for the delay.
Update
19 Nov 2014 11:01:57
There is further planned work on the Pimlico exchange for the 20th November. This should resolve the congestion on the Exchange.
Update
25 Nov 2014 10:44:43
Pimlico lines are now running as expected. Thanks for your patience.
Started 21 Oct 2014 13:31:50
Closed 25 Nov 2014 10:43:46

04 Nov 2014 16:47:11
Details
04 Nov 2014 09:42:18
Several graphs have been missing in recent weeks, on some days and on some LNSs. This is something we are working on. Unfortunately, today, one of the LNSs is not showing live graphs again, and so these will not be logged overnight. We hope to have a fix for this in the next few days. Sorry for any inconvenience.
Resolution The underlying cause has been identified and a fix will be deployed over the next few days.
Started 01 Oct 2014
Closed 04 Nov 2014 16:47:11
Previously expected 10 Nov 2014

01 Nov 2014 11:35:11
[Broadband] - Blip - Closed
Details
01 Nov 2014 11:55:38
There appears to have been a small DoS attack which resulted in a blip around 11:29:16 today, causing some issues with broadband lines and other services. We're looking into this at present, and graphs are not currently visible on one of the LNSs for customers.
Update
01 Nov 2014 13:09:44
We expect graphs on a.gormless to be back tomorrow morning after some planned work.
Resolution Being investigated further.
Started 01 Nov 2014 11:29:16
Closed 01 Nov 2014 11:35:11

29 Sep 2014 22:37:36
Details
21 Aug 2014 12:50:32
Over the past week or so we have been missing data on some monitoring graphs; this shows as purple for the first hour in the morning. It is being caused by delays in collecting the data, which is being looked into.
Resolution We believe this has been fixed now. We have been monitoring it for a fortnight after making an initial fix, and it looks to have been successful.
Closed 29 Sep 2014 22:37:36

20 Sep 2014 07:09:09
Details
20 Sep 2014 11:59:13
RADIUS accounting is behind at the moment. This is causing usage data to appear to be missing from customer lines. The accounting is behind, but it is not broken and is catching up. The usage data does not appear to be lost, and should appear later in the day.
Update
21 Sep 2014 08:12:52
Records have now caught up.
Closed 20 Sep 2014 07:09:09
Previously expected 20 Sep 2014 15:57:11

26 Aug 2014 09:15:00
Details
26 Aug 2014 09:02:02
Yesterday's and today's line graphs are not being shown at the moment. We are working on restoring this.
Update
26 Aug 2014 09:42:18
Today's graphs are back; yesterday's are lost, though.
Started 26 Aug 2014 08:00:00
Closed 26 Aug 2014 09:15:00

01 Sep 2014 19:42:08
Details
01 Sep 2014 19:42:56
c.gormless rebooted, lines moved to other LNS automatically. We are investigating.
Broadband Users Affected 33%
Started 01 Sep 2014 19:39:19
Closed 01 Sep 2014 19:42:08

23 Apr 2014 10:21:03
Details
01 Nov 2013 15:05:00
We have identified an issue that appears to be affecting some customers with FTTC modems. The issue is stupidly complex, and we are still trying to pin down the exact details. The symptoms appear to be that some packets are not passing correctly, some of the time.

Unfortunately, one of the types of packet that refuse to pass correctly is FireBrick FB105 tunnel packets. This means customers relying on FB105 tunnels over FTTC are seeing issues.

The workaround is to unplug the Ethernet lead from the modem and then reconnect it. This seems to fix the issue, at least until the next PPP restart. If you have remote access to a FireBrick, e.g. via a WAN IP, and need to do this, you can change the Ethernet port settings to force it to re-negotiate, which has the same effect - this only works if the FireBrick is directly connected to the FTTC modem, as the fix does need the modem's Ethernet to restart.

We are asking BT about this, and we are currently assuming this is a firmware issue on the BT FTTC modems.

We have confirmed that modems re-flashed with non-BT firmware do not have the same problem, though we don't usually recommend doing this as it is a BT modem and part of the service.

Update
04 Nov 2013 16:52:49
We have been working on getting more specific information regarding this, we hope to post an update tomorrow.
Update
05 Nov 2013 09:34:14
We have reproduced this problem by sending UDP packets using 'Scapy'. We are doing further testing today, and hope to write up a more detailed report about what we are seeing and what we have tested.
Update
05 Nov 2013 14:27:26
We have some quite good demonstrations of the problem now, and it looks like it will mess up most VPNs based on UDP. We can show how a whole range of UDP ports can be blacklisted by the modem somehow on the next PPP restart. It is crazy. We hope to post a little video of our testing shortly.
Update
05 Nov 2013 15:08:16
Here is an update/overview of the situation. (from http://revk.www.me.uk/2013/11/bt-huawei-fttc-modem-bug-breaking-vpns.html )

We have confirmed that the latest code in the BT FTTC modems appears to have a serious bug that is affecting almost anyone running any sort of VPN over FTTC.

Existing modems seem to be upgrading, presumably due to a roll-out of new code by BT. An older modem that has not been on-line for a while is fine. A re-flashed modem with non-BT firmware is fine. A modem that had been working on the line for a while suddenly stopped working, presumably having been upgraded.

The bug appears to be that the modem manages to "blacklist" some UDP packets after a PPP restart.

If we send a number of UDP packets, using various UDP ports, then cause PPP to drop and reconnect, we then find that around 254 combinations of UDP IP/ports are now blacklisted. I.e. they no longer get sent on the line. Other packets are fine.

Sending 500 different packets, around 254 of them will not work again after the PPP restart. It is not necessarily the first or last 254 packets (some in the middle are affected), but it seems to be 254 combinations. They work as much as you like before the PPP restart, and then never work after it.

We can send a batch of packets, wait 5 minutes, PPP restart, and still find that packets are now blacklisted. We have tried a wide range of ports, high and low, different src and dst ports, and so on - they are all affected.

The only way to "fix" it, is to disconnect the Ethernet port on the modem and reconnect. This does not even have to be long enough to drop PPP. Then it is fine until the next PPP restart. And yes, we have been running a load of scripts to systematically test this and reproduce the fault.

The problem is that a lot of VPNs use UDP and use the same set of ports for all of the packets, so if that combination is blacklisted by the modem the VPN stops after a PPP restart. The only way to fix it is manual intervention.

The modem is meant to be an Ethernet bridge. It should not know anything about PPP restarting or UDP packets and ports. It makes no sense that it would do this. We have tested swapping working and broken modems back and forth. We have tested with a variety of different equipment doing PPPoE and IP behind the modem.

BT are working on this, but it is a serious concern that this is being rolled out.
Update
12 Nov 2013 10:20:18
Work on this is still ongoing... We have tested this on a standard BT retail FTTC 'Infinity' line, and the problem cannot be reproduced there. We suspect this is because a different IP address is allocated each time the PPP re-establishes, so whatever is doing the session tracking does not match the new connection.
Update
12 Nov 2013 11:08:17

Here is an update with a more specific explanation of the problem we are seeing:

On WBC FTTC, we can send a UDP packet inside the PPP and then drop the PPP a few seconds later. After the PPP re-establishes, UDP packets with the same source and destination IP and ports won't pass; they do not reach the LNS at the ISP.

Further to that, it's not just one src+dst IP and port tuple which is affected. We can send 254 UDP packets using different src+dst ports before we drop the PPP. After it comes back up, all 254 port combinations will fail. It is worth noting that this cannot be reproduced on an FTTC service which allocates a dynamic IP that changes each time the PPP re-establishes.

If we send more than 254 packets, only 254 will be broken and the others will work. It's not always the first 254 or the last 254; the broken ones move around between tests.

So it sounds like the modem (or, less likely, something in the cab or exchange) is creating state table entries for packets it is passing which tie them to a particular PPP session, and then failing to flush the table when the PPP goes down.
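That stale-state-table hypothesis can be sketched as a toy model. This is purely illustrative: the class name, the bounded table of 254 entries, and the "first 254 flows" simplification are assumptions drawn from the observations above, not Huawei's or BT's actual implementation (in reality the broken tuples move around between tests):

```python
# Toy model of the hypothesised fault: the modem records a state entry for
# each UDP flow it forwards, ties it to the current PPP session, and never
# flushes those entries when the session ends.
class BuggyModem:
    TABLE_SIZE = 254  # observed: ~254 port combinations end up blacklisted

    def __init__(self):
        self.session = 1
        self.table = {}  # (src_port, dst_port) -> PPP session id it was seen on

    def forward(self, src_port, dst_port):
        key = (src_port, dst_port)
        # Record the flow against the current session (the table is bounded).
        if key not in self.table and len(self.table) < self.TABLE_SIZE:
            self.table[key] = self.session
        # A flow tied to a *dead* session is dropped instead of forwarded.
        return self.table.get(key, self.session) == self.session

    def ppp_restart(self):
        self.session += 1  # the bug: stale table entries are never flushed

modem = BuggyModem()
flows = [(10000 + i, 5000) for i in range(500)]
before = sum(modem.forward(s, d) for s, d in flows)  # all 500 flows pass
modem.ppp_restart()
after = sum(modem.forward(s, d) for s, d in flows)   # 254 flows now fail
print(before, 500 - after)  # 500 254
```

Under this model, unplugging and reconnecting the Ethernet lead would correspond to clearing the table, which is exactly the only workaround observed.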

This is a little crazy in the first place. It's a modem. It shouldn't even be aware that it's passing PPPoE frames, let alone be looking inside them to see that they are UDP.

This only happens when using an Openreach Huawei HG612 modem that we suspect has been recently remotely and automatically upgraded by Openreach in the past couple of months. Further - a HG612 modem with the 'unlocked' firmware does not have this problem. A HG612 modem that has probably not been automatically/remotely upgraded does not have this problem.

Side note: One theory is that the brokenness is actually happening in the street cab and not the modem. And that the new firmware in the modem which is triggering it has enabled 'link-state forwarding' on the modem's Ethernet interface.

Update
27 Nov 2013 10:09:42
This post has been a little quiet, but we are still working with BT/Openreach regarding this issue. We hope to have some more information to post in the next day or two.
Update
27 Nov 2013 10:10:13
We have also had reports from someone outside of AAISP reproducing this problem.
Update
27 Nov 2013 14:19:19
We have spent the morning with some nice chaps from Openreach and Huawei. We have demonstrated the problem and they were able to do traffic captures at various points on their side. Huawei HQ can now reproduce the problem and will investigate the problem further.
Update
28 Nov 2013 10:39:36
Adrian has posted about this on his blog: http://revk.www.me.uk/2013/11/bt-huawei-working-with-us.html
Update
13 Jan 2014 14:09:08
We are still chasing this with BT.
Update
03 Apr 2014 15:47:59
We have seen this affect SIP registrations (which use 5060 as the source and target)... Customers can contact us and we'll arrange a modem swap.
Update
23 Apr 2014 10:21:03
BT are in the process of testing an updated firmware for the modems with customers. Any customers affected by this can contact us and we can arrange a new modem to be sent out.
Resolution BT are testing a fix in the lab and will deploy in due course, but this could take months. However, if any customers are adversely affected by this bug, please let us know and we can arrange for BT to send a replacement ECI modem instead of the Huawei modem. Thank you all for your patience.

--Update--
BT do have a new firmware that they are rolling out to the modems. So far it does seem to have fixed the fault and we have not heard of any other issues as of yet. If you do still have the issue, please reboot your modem, if the problem remains, please contact support@aa.net.uk and we will try and get the firmware rolled out to you.
Started 25 Oct 2013
Closed 23 Apr 2014 10:21:03

13 Aug 2014 09:15:00
Details
13 Aug 2014 11:26:08
Due to a RADIUS issue we were not receiving line statistics from just after midnight. As a result we needed to force lines to log in again. This would have caused lines to lose their PPP connection and then reconnect at around 9AM. We apologise for this, and will be investigating the cause.
Started 13 Aug 2014 09:00:00
Closed 13 Aug 2014 09:15:00

08 Aug 2014 15:25:00
Details
08 Aug 2014 15:42:28
At 15:15 we saw customers on the 'D' LNS lose their connection and reconnect a few moments later. The cause of this is being looked into.
Resolution Lines quickly came back online, we apologise for the drop though. The cause will be investigated.
Started 08 Aug 2014 15:15:00
Closed 08 Aug 2014 15:25:00

01 Aug 2014 10:00:00
Details
27 Jul 2014 21:00:00
We saw what looks to be congestion on some lines on the Rugby exchange (BT lines), showing slight packet loss on Sunday evening. We will report this to BT.
Update
30 Jul 2014 11:03:08
Card replaced early hours this morning, which should have fixed the congestion problems.
Started 27 Jul 2014 21:00:00
Closed 01 Aug 2014 10:00:00

28 Jul 2014 11:00:00
Details
28 Jul 2014 09:20:03
Customers may have seen a drop and reconnect of their broadband lines this morning. Due to a problem with our RADIUS accounting on Sunday we have needed to restart our customer database server, Clueless. This has been done, and Clueless is back online. Due to the initial problem with RADIUS accounting most DSL lines have had to be restarted.
Update
28 Jul 2014 10:02:13
We are also sending out order update messages in error - e.g. emails about orders that have already completed. We apologise for this confusion and are investigating.
Started 28 Jul 2014 09:00:00
Closed 28 Jul 2014 11:00:00

17 Jul 2014 17:45:00
Details
17 Jul 2014 16:23:15
We have a few reports from customers, and a vague incident report from BT, suggesting there may be a PPP problem within the BT network which is affecting customers logging in to us. Customers may see their ADSL router in sync, but not able to log in (no PPP).
Update
17 Jul 2014 16:40:31
This looks to be affecting BT ADSL and FTTC circuits. A line which tries to log in may well fail.
Update
17 Jul 2014 16:42:34
Some lines are logging in successfully now.
Update
17 Jul 2014 16:54:15
Not all lines are back yet, but lines are still logging back in, so if you are still offline it may take a little more time.
Resolution This was a BT incident, reference IMT26151/14. This was closed by BT at 17:45 without giving us further details about what the problem was or what they did to restore service.
Started 17 Jul 2014 16:00:00
Closed 17 Jul 2014 17:45:00

11 Jul 2014 11:03:55
Details
11 Jul 2014 17:00:48
The "B" LNS restarted today, unexpectedly. All lines reconnected within minutes (however fast the modem retries). We'll clear some traffic off the "D" server back to the "B" server later this evening.
Resolution We're investigating the cause of this.
Broadband Users Affected 33%
Started 11 Jul 2014 11:03:52
Closed 11 Jul 2014 11:03:55

01 Jul 2014 23:25:00
Details
01 Jul 2014 20:50:32
We have identified some TalkTalk backhaul lines with congestion starting around 16:20, now showing 100ms latency with 2% loss. This affects around 3% of our TT lines.

We have techies in TalkTalk on the case and hope to have it resolved soon.

Update
01 Jul 2014 20:56:19
"On call engineers are being scrambled now - we have an issue in the wider Oxford area and you should see an incident coming through shortly."
Resolution Engineers fixed the issue last night.
Started 01 Jul 2014 16:20:00
Closed 01 Jul 2014 23:25:00
Previously expected 02 Jul 2014

19 Jun 2014 14:33:59
Details
11 Mar 2014 10:11:55
We are seeing multiple exchanges with packet loss over BT Wholesale. We are chasing BT on this and will update as and when we have news. Affected: GOODMAYES, CANONBURY, HAINAULT, SOUTHWARK, LOUGHTON, HARLOW, NINE ELMS, UPPER HOLLOWAY, ABERDEEN DENBURN, HAMPTON, INGREBOURNE, COVENTRY, 21CN-BRAS-RED6-SF.
Update
14 Mar 2014 12:49:28
This has now been escalated to the next level for further investigation.
Update
17 Mar 2014 15:42:38
BT are now raising faults on each individual exchange.
Update
21 Mar 2014 10:19:24
Below are the exchanges/RAS which have been fixed by capacity upgrades. We are hoping for the remaining four exchanges to be fixed in the next few days.
HAINAULT
SOUTHWARK
LOUGHTON
HARLOW
ABERDEEN DENBURN
HAMPTON
INGREBOURNE
GOODMAYES
RAS 21CN-BRAS-RED6-SF
Update
21 Mar 2014 15:52:45
COVENTRY should be resolved later this evening when a new link is installed between Nottingham and Derby. CANONBURY is waiting for CVLAN moves that begin 19/03/2014 and will be completed 01/04/2014.
Update
25 Mar 2014 10:09:23
CANONBURY - Planned engineering works took place on 19.3.14, and three more are planned for 25.3.14, 26.3.14 and 1.4.14.
COVENTRY - Is now fixed
NINE ELMS and UPPER HOLLOWAY- Still suffering from packet loss and BT are investigating further.
Update
02 Apr 2014 15:27:11
BT are still investigating congestion on Canonbury, Nine Elms and Upper Holloway.
Update
23 Apr 2014 11:45:44
CANONBURY - further PEWs on 7th and 8th May
NINE ELMS - A total of 384 EUs have been migrated. A further 614 are planned to be migrated in the early hours of 25/04/14.
UPPER HOLLOWAY - Planned Engineering Work on 28th April
BEULAH HILL and TEWKESBURY - Seeing congestion at peak times; we are chasing BT on these also.
Update
30 Apr 2014 12:51:24
NINE ELMS - T11003 - Investigations are still ongoing for Nine Elms.
UPPER HOLLOWAY - T11004 - BT are working on this and a resolution should be available soon.
TEWKESBURY - T11200 - This is on the backhaul list and will be dealt with shortly; the work request was closed as no separate investigation was required.
MONMOUTH - T11182 - ALS583669 - This was balanced, but we have advised BT that it is still not up to standard and they will continue to investigate. This is on the Backhaul Spreadsheet also, so it is being investigated by the capacity team.
BEULAH HILL - Being investigated.
Update
02 May 2014 12:45:16
CANONBURY - 580 EUs being migrated on 7th May and 359 EUs on 8th May
NINE ELMS - Emergency PEW PW238650 will take place in the early hours of 02/05/14. This is to move 500 circuits off 4 IPSVs onto other IPSVs.
UPPER HOLLOWAY - Currently BT TSO have 12 projects scheduled for Upper Holloway.
TEWKESBURY - This is with BT TSO / Backhaul upgrades.
MONMOUTH - This is with BT TSO / Backhaul upgrades.
BEULAH HILL - Possibly fixed last night. We will monitor to see if it is any better this evening.
BAYSWATER - Packet loss identified and reported to BT
Update
06 May 2014 11:44:59
TEWKESBURY - Fixed
CANONBURY - EUs being migrated on 7th May and 359 EUs on 8th May

Still seeing some lines with issues after the upgrade. Passed back to BT.
NINE ELMS
MONMOUTH
UPPER HOLLOWAY
BEULAH HILL
READING EARLEY
Update
09 May 2014 16:16:33
CANONBURY and NINE ELMS - Now fixed
UPPER HOLLOWAY - We have asked the team dealing with this for the latest update; email sent 09/05/2014.
MONMOUTH - BT TSO are still chasing this.
BEULAH HILL - BT TSO are chasing for a date on a PEW for the work to be carried out.
BAYSWATER - BT TSO are still chasing this.
READING EARLEY - An unbalanced LAG has been identified. Rebalancing will be completed out of hours; no ETA on this, sorry.
Update
15 May 2014 10:47:22
UPPER HOLLOWAY - Now fixed
MONMOUTH - We have been advised that the target date for the capacity increase is the 22nd May.
BEULAH HILL - Escalated this to a Duty Manager asking if he can gain an update.
EARLEY - TSO advised that the capacity team have replied and hope to get the new 10 gigabit links into service this month. No further updates, so this has been escalated to a Duty Manager to try to ascertain a specific date in May 2014 when it will take place.
Update
21 May 2014 09:32:00
Reading Earley / Monmouth - Now fixed
Bayswater - We have received a reply from the capacity management team, advising that to alleviate capacity issues, moves are taking place on May 23rd and May 28th.
Beulah Hill - Due to issues with cabling this has been delayed; we are currently awaiting a date when the cables can be run so that the integration team can bring this into service.
Update
02 Jun 2014 15:15:55
Bayswater - Now fixed
Beulah Hill - To alleviate capacity issues, moves are taking place between June 2nd and June 6th.
Update
10 Jun 2014 12:16:52
Beulah Hill - Now fixed
AYR - Seeing congestion on many lines, which has been reported.
Update
19 Jun 2014 14:33:06
AYR - Is now fixed
Broadband Users Affected 1%
Started 09 Mar 2014 10:08:25 by AAISP Pro Active Monitoring Systems
Closed 19 Jun 2014 14:33:59

11 Jun 2014 15:08:59
Details
11 Jun 2014 15:12:53
It looks like one of our LNSs restarted. This will have affected a third of our broadband customers. Lines all reconnected straight away and customers should not see any further problems. The usage graphs from midnight until the restart will have been lost.
Broadband Users Affected 33%
Started 11 Jun 2014 15:05:00
Closed 11 Jun 2014 15:08:59

12 May 2014 08:55:06
Details
10 May 2014 15:52:02
At 15:33 all 20CN lines on Kingston RASs dropped. We are chasing BT now.
Update
10 May 2014 16:05:18
BT have raised an incident. Apparently issue has been caused by power issues at London Kingston.
Update
12 May 2014 08:55:29
This was fixed after power was restored and a remote reset was performed.
Started 10 May 2014 15:50:27 by AAISP Staff
Closed 12 May 2014 08:55:06
Cause BT

28 Apr 2014 13:37:28
Details
24 Apr 2014 14:23:02
Some TalkTalk connected lines dropped at around 14:14. They are reconnecting now though. We'll investigate and will update this post.
Update
24 Apr 2014 14:29:01
This looked like it was a wider TalkTalk problem as other ISPs were also affected.
Most lines are back online now though. We will investigate further.
Update
24 Apr 2014 14:40:50
TalkTalk have been contacted and a Reason for Outage has been requested.
Update
24 Apr 2014 15:02:33
TalkTalk have confirmed the outage on their status page: http://solutions.opal.co.uk/network-status-report.php?reportid=3893
Update
24 Apr 2014 16:24:24
Update from TalkTalk: 15:59 24/04/2014 Supplier has noticed a link flap between two exchanges which resulted in brief loss of service for some DSL customers. The traffic was reconverged over alternative links. Supplier is still investigating for the root cause.
Resolution Incident was due to a transmission failure which the supplier is investigating with the switch vendor. We've also had this update from TalkTalk: The cause was identified as a blown rectifier.
Started 24 Apr 2014 14:14:00
Closed 28 Apr 2014 13:37:28

02 May 2014 08:48:41
Details
02 May 2014 08:13:36
We did some work yesterday to try to ensure we are correctly tracking lines being up and down. If there is ever a problem with RADIUS accounting, this state can get out of step. It is meant to sort itself out automatically, but there seemed to be some cases where that was not quite right.

Unfortunately the change led to lots of up/down emails, texts, and tweets overnight.

We think we have managed to address that now, and will be monitoring during the day.

Resolution We believe this is all sorted now.
Started 01 May 2014 20:00:00
Closed 02 May 2014 08:48:41
Previously expected 02 May 2014 12:00:00

02 May 2014 08:48:46
Details
22 Mar 2014 07:36:41
We started to see yet more congestion on BT lines last night. This again looks like a link aggregation issue (where one leg of a multiple-link trunk within BT is full). The pattern is not as obvious this time. Looking at the history we can see that some of the affected lines have had slight loss in the evenings. We did not spot this with our tools because of the rather odd pattern. Obviously we are trying to get this sorted with BT, but we are pleased to confirm that BT are now actually providing more data showing which network components each circuit uses within their network. We plan to integrate this soon so that we can correlate some of these newer congestion issues and point BT in the right direction more quickly.
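To illustrate why one leg of a trunk can fill while the others stay quiet: link aggregation typically picks a physical leg per flow by hashing the flow's addresses and ports, so a few heavy flows that happen to hash the same way all land on one leg. This is a rough, hypothetical sketch of that behaviour, not BT's actual hashing scheme:

```python
import hashlib

LEGS = 4  # number of physical links in the (hypothetical) trunk

def pick_leg(src_ip, dst_ip, src_port, dst_port, legs=LEGS):
    # Hash the flow's 4-tuple to choose a leg, as a LAG typically does.
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % legs

def leg_loads(flows, legs=LEGS):
    # Count how many flows land on each leg.
    load = [0] * legs
    for f in flows:
        load[pick_leg(*f)] += 1
    return load

# Twelve flows between the same pair of hosts: the hash can spread them
# very unevenly, so one leg may carry far more traffic than the others.
flows = [("10.0.0.1", "192.0.2.1", 5000 + i, 80) for i in range(12)]
print(leg_loads(flows))
```

Because the per-flow hash is deterministic, the imbalance persists evening after evening, which matches the recurring peak-time loss pattern described above.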
Started 21 Mar 2014 18:00:00
Closed 02 May 2014 08:48:46

24 Apr 2014 13:36:17
Details
17 Feb 2014 20:13:09
We are seeing packet loss at peak times on some lines on the Crouch End exchange. It's a small number of customers, and it looks like a congested SVLAN. This has been reported to BT.
Update
18 Feb 2014 10:52:26
Initially BT were unable to see any problem, their monitoring was not showing any congestion and they wanted us to report individual line faults rather than this being dealt as a specific BT network problem. However we have spoken to another ISP who confirms the problem. BT have now opened an Incident and will be investigating.
Update
18 Feb 2014 11:12:47
We have passed all our circuit details and graphs to proactive to investigate.
Update
18 Feb 2014 16:31:17
TSO will investigate overnight
Update
20 Feb 2014 10:15:02
No updates from TSO, proactive are chasing.
Update
27 Feb 2014 13:24:38
There is still congestion, we are chasing BT again.
Update
28 Feb 2014 09:34:50
It appears the issue is on the MSE router. Lines connected to the MSE are due to be migrated on 21st March, and BT are hoping to have this completed by then.
Update
24 Apr 2014 15:25:06
All lines on the Crouch End exchange are now showing clear.
Broadband Users Affected 0.10%
Started 17 Feb 2014 20:10:29
Closed 24 Apr 2014 13:36:17

04 Apr 2014 17:05:09
Details
08 Apr 2014 16:58:41
Some lines on the BT LEITH exchange have gone down. BT are aware and are investigating at the moment.
Started 08 Apr 2014 16:30:20 by Customer report
Closed 04 Apr 2014 17:05:09

03 Apr 2014 12:26:40
Details
25 Mar 2014 09:55:20

We are seeing customer routers being attacked this morning, which is causing them to drop. This was previously reported in the status post http://status.aa.net.uk/1877 where we saw that the attacks were affecting ZyXEL routers, as well as other makes.

Since that post we have updated the configuration of customer ZyXEL routers where possible, and these are no longer being affected. However, these attacks are affecting other types of routers.

We suggest that customers with lines that are dropping check their router configuration and disable access to the router's web interface from the internet, or at least change the port used (e.g. to one in the range 1024-65535).

Please speak to Support for more information.
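A quick way to check whether a router's web interface is exposed is simply to attempt a TCP connection to it from outside your own network. This is a minimal sketch (not an AAISP tool) using only the Python standard library:

```python
import socket

def wan_port_open(host, port=80, timeout=3):
    # Returns True if a TCP connection to host:port succeeds from this
    # vantage point, i.e. the web interface is reachable from here.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this against your WAN IP from a machine outside your network (e.g. a phone on mobile data or a remote server); if it returns True on port 80 after you have disabled WAN access, the change has not taken effect.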

Update
28 Mar 2014 10:13:13
This is happening again; do speak to support if you need help changing the web interface settings.
Customers with ZyXELs can change the port from the control pages.
Started 25 Mar 2014 09:00:40
Closed 03 Apr 2014 12:26:40

01 Apr 2014 10:00:00
Details
01 Apr 2014 12:13:31
Some TalkTalk connected lines dropped at around 09:50 and reconnected a few minutes after. It looks like a connectivity problem between us and TalkTalk on one of our connections to them. We are investigating further.
Started 01 Apr 2014 09:50:00
Closed 01 Apr 2014 10:00:00

31 Mar 2014 15:03:25
Details
31 Mar 2014 09:40:40
Some TalkTalk line diagnostics (signal graphs and line tests), as available from the Control Pages, are not working at the moment. This is being looked into.
Update
31 Mar 2014 15:03:17
This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Resolution This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Started 31 Mar 2014 09:00:00
Closed 31 Mar 2014 15:03:25

20 Mar 2014 11:17:21
Details
20 Mar 2014 08:38:52
Customers will be seeing what looks like 'duplicated' usage reporting on the Control Pages for last night and this morning. This has been caused by a database migration taking longer than expected. The 'duplication' has been caused by usage reports being missed; in subsequent hours the usage has been spread equally across the missed hours.
This means that overall the usage reporting will be correct, but an individual hour will be incorrect.
This has also affected a few other related things such as the Line Colour states.
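The redistribution described above can be sketched roughly as follows (a hypothetical illustration of the behaviour, not our actual billing code): missed hours are filled with an equal share of the shortfall, so the total stays right even though individual hours are wrong.

```python
def spread_missed_usage(hourly, reported_total):
    # hourly: per-hour usage figures, with None for hours whose report
    # was missed. The shortfall against the reported total is spread
    # equally across the missed hours.
    known = sum(v for v in hourly if v is not None)
    missed = [i for i, v in enumerate(hourly) if v is None]
    share = (reported_total - known) / len(missed) if missed else 0
    return [share if v is None else v for v in hourly]

# Two missed hours out of a 25-unit total: each gets half the shortfall.
print(spread_missed_usage([10, None, None, 5], 25))  # [10, 5.0, 5.0, 5]
```

Note how the overall total is preserved, which is why monthly billing is unaffected even though the hourly graphs look 'duplicated'.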
Update
20 Mar 2014 11:17:55
Usage reporting is now back to normal.
Started 19 Mar 2014 18:00:00
Closed 20 Mar 2014 11:17:21

02 Mar 2014 11:33:29
Details
01 Mar 2014 04:24:02
Lines: 100% 21CN-REGION-GI-B dropped at 2014-03-01 04:22:17
We have advised BT
This is likely to have affected multiple internet providers using BT
Update
01 Mar 2014 04:25:06
Lines: 100% 21CN-REGION-GI-B dropped again at 2014-03-01 04:23:21.
Broadband Users Affected 2%
Started 01 Mar 2014 04:22:17 by AAISP automated checking
Closed 02 Mar 2014 11:33:29
Cause BT

11 Mar 2014 09:32:42
Details
06 Mar 2014 13:07:51

We have had a small number of reports from customers who have had the DNS settings on their routers altered. The IPs we are seeing set are 199.223.215.157 and 199.223.212.99 (there may be others)

This type of attack is called Pharming. In short, it means that any internet traffic could be redirected to servers controlled by the attacker.


At the moment we are logging when customers try to access these IP addresses and we are then contacting the customers to make them aware.

To solve the problem we are suggesting that customers replace the router or speak to their local IT support.
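A simple self-check is to compare the DNS servers configured on your router against the rogue addresses seen in this incident. This is an illustrative sketch only (the rogue list covers just the two IPs mentioned above; there may be others):

```python
# DNS server IPs observed in this pharming incident (per the post above).
ROGUE_DNS = {"199.223.215.157", "199.223.212.99"}

def dns_looks_hijacked(configured_servers):
    # Return any configured DNS servers that match the known rogue IPs.
    # An empty result does not guarantee the router is clean.
    return sorted(set(configured_servers) & ROGUE_DNS)
```

For example, a router handing out 199.223.212.99 alongside a legitimate resolver would be flagged, and should have its DNS set back to automatic and its admin password changed as described below.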

Update
06 Mar 2014 13:33:10
Changing the DNS settings back to auto, changing the administrator password and disabling WAN side access to the router may also prevent this from happening again.
Update
06 Mar 2014 13:48:14
Also reported here: http://www.pcworld.com/article/2104380/
Resolution We have contacted the few affected customers.
Started 06 Mar 2014 09:00:00
Closed 11 Mar 2014 09:32:42

07 Mar 2014 15:08:45
Details
07 Mar 2014 15:10:59
Some broadband lines blipped at 15:05. This was a result of one of our LNSs restarting. Lines are back online and we'll investigate the cause.
Started 07 Mar 2014 15:03:00
Closed 07 Mar 2014 15:08:45

27 Feb 2014 20:40:00
Details
27 Feb 2014 20:29:14
We are seeing some TT lines dropping and a routing problem.
Update
27 Feb 2014 20:39:20
Things are ok now, we're investigating. This looks to have affected some routing for broadband customers and caused some TT lines to drop.
Resolution We are not entirely sure what caused this, however we do believe it to be related to BGP flapping. This also looks to have affected other ISPs and networks too.
Started 27 Feb 2014 20:18:00
Closed 27 Feb 2014 20:40:00

16 Feb 2014 17:59:00
Details
16 Feb 2014 18:12:15
All lines reconnected right away as per normal backup systems, but graphs on the "B" LNS have lost history from before the reset. The exact cause is not yet obvious, but at the same time there is yet another of these quite regular attacks on ZyXEL routers, which adds to the confusion. As advised in another status post, there are changes to ZyXEL router config planned to address the issue.
Broadband Users Affected 33.33%
Started 16 Feb 2014 17:58:00
Closed 16 Feb 2014 17:59:00

24 Feb 2014 12:00:00
Details
11 Jan 2014 08:42:32
Since around 2am, as well as a short burst last night around 19:45, we have seen some issues with some lines. This appears to be specific to certain types of router being used on the lines. We are still investigating this.
Update
11 Jan 2014 10:53:53
At the moment, we have managed to identify at least some of the traffic and the affected routers and block it temporarily. We'll be able to provide some more specific advice on the issue and contact affected customers in due course.
Update
13 Jan 2014 14:07:56
We blocked a further IP this morning.
Update
15 Jan 2014 08:17:47
The issue is related to specific routers, and is affecting many ISPs. In our case it is almost entirely ZyXEL routers that are affected. It appears to be some sort of widespread and ongoing SYN flood attack that is causing routers to crash, resulting in loss of sync. We are operating some source IP blocking temporarily to address these issues for the time being, and will shortly have a simple button on our control pages to reconfigure ZyXEL routers for affected customers.
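Source IP blocking of this kind boils down to counting SYNs per source over a window and blocking the heavy hitters. A minimal sketch of that idea, purely illustrative (the threshold is made up and this is not our actual filtering code):

```python
from collections import Counter

SYN_THRESHOLD = 100  # SYNs per window before a source is blocked (illustrative)

def sources_to_block(syn_log, threshold=SYN_THRESHOLD):
    # syn_log: iterable of source IPs seen sending SYNs in one window.
    # Returns the sources exceeding the threshold, sorted for stable output.
    counts = Counter(syn_log)
    return sorted(ip for ip, n in counts.items() if n >= threshold)

# One aggressive source and one normal one: only the flooder is blocked.
log = ["203.0.113.9"] * 150 + ["198.51.100.4"] * 3
print(sources_to_block(log))  # ['203.0.113.9']
```

In practice such blocks would be applied upstream of the affected routers and expired after a while, since attack sources change over time.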
Update
07 Feb 2014 10:24:07
Last night and this morning there was another flood of traffic causing ZyXELs to restart. We suggest changing the web port to something other than 80, details can be found here: http://wiki.aa.org.uk/Router_-_ZyXEL_P660R-D1#Closing_WAN_HTTP
Update
13 Feb 2014 10:44:41
We will be contacting ZyXEL customers by email over the next few days regarding these problems. Before that though, to verify our records of the router type, we will be performing a 'scan' of customer's WAN IP addresses. This scan will involve downloading the index page from the WAN address.
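The scan described above amounts to fetching each WAN address's index page and fingerprinting the response. This is a rough hypothetical sketch, not our actual tooling; the RomPager check is one plausible fingerprint, since many ZyXEL DSL routers use the RomPager embedded web server:

```python
from urllib.request import urlopen

def fetch_server_header(ip, timeout=5):
    # Download the index page from a WAN address and return the Server
    # header, which often identifies the router model. None on failure.
    try:
        with urlopen(f"http://{ip}/", timeout=timeout) as resp:
            return resp.headers.get("Server", "")
    except OSError:
        return None

def identify_router(server_header):
    # Map a Server header to a router family (illustrative mapping only).
    header = (server_header or "").lower()
    if "rompager" in header or "zyxel" in header:
        return "ZyXEL"
    return "unknown"
```

For example, a header like "RomPager/4.07 UPnP/1.0" would be classified as a ZyXEL, while anything unrecognised is left as unknown rather than guessed.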
Update
20 Feb 2014 21:34:54
Customers with ZyXELs online have been contacted this week regarding this issue.
Update
24 Feb 2014 11:17:13
As per email to affected customers, we are updating the http port on ZyXEL routers today - Customers will be emailed as their router is updated.
Resolution Affected customers have been notified, tools in place on the Control Pages for customers to manage the http port and where appropriate ZyXEL routers have had their http port and WAN settings changed.
Broadband Users Affected 5%
Started 11 Jan 2014 02:00:00
Closed 24 Feb 2014 12:00:00

22 Feb 2014 08:00:00
Details
22 Feb 2014 07:56:22
There seems to have been something going on between 2am and 3am. We even had some incidents in BT, but whatever was going on managed to cause an unexpected restart of one of our LNSs ("B") just after 3am, so graphs from before then are lost. At 7:55 lines that had ended up on the "D" LNS were moved back to the "B" LNS, causing a PPP restart.
Broadband Users Affected 33.33%
Started 22 Feb 2014 03:00:00
Closed 22 Feb 2014 08:00:00
Previously expected 22 Feb 2014 08:00:00

20 Feb 2014 18:18:00
Details
20 Feb 2014 09:20:19
We are seeing some lines unable to log in since a blip at 02:49. We are contacting BT. These lines are in sync, but PPP is failing. It looks like a number of BT RASs are affected, including 21CN-BRAS-RED9-GI-B and 21CN-BRAS-RED1-NT-B.
Update
20 Feb 2014 09:31:18
BT were already aware of the problem and are investigating.
Update
20 Feb 2014 12:23:12
These lines are still down, we are chasing BT.
Update
20 Feb 2014 13:21:20
BT believed this issue had been fixed. We have supplied them with a list of all of our circuits that are down. This has been passed to TSO and we should have an update in the next hour.
Update
20 Feb 2014 14:26:44
A new incident has been raised as BT thought the issue was fixed.
Update
20 Feb 2014 14:27:56
The issue is apparently still being diagnosed.
Update
20 Feb 2014 21:17:48
BT fixed this at 18:18 this evening.
Update
20 Feb 2014 21:34:04
BT say:
BT apologises for the problems experienced today by WMBC customers and are pleased to advise the issue has been fully resolved following the back out of a planned work completed overnight. BT is aware and understands the fault which occurred and have engaged vendor support to commence urgent investigations to identify the root cause.
The BT Technical Services teams have monitored the network since the corrective actions taken at 18:04 and have confirmed the network has remained stable.
Broadband Users Affected 0.20%
Started 20 Feb 2014 03:49:00
Closed 20 Feb 2014 18:18:00

20 Feb 2014 10:00:00
Details
20 Feb 2014 10:24:43
In addition to https://status.aa.net.uk/1891 there is a UK wide problem with lines logging in. This is affecting other ISPs, and affecting a small number of lines. BT are already aware.
Update
20 Feb 2014 11:07:55
BT are saying this is now fixed. We saw affected lines come back online just after 10am. BT say about half of the UK 21CN WBC lines were affected, however, we only saw a few dozen lines affected.
Started 20 Feb 2014 09:00:00
Closed 20 Feb 2014 10:00:00