
Yesterday 11:09:46
9 Dec 11:20:04
Some lines on the LOWER HOLLOWAY exchange are experiencing peak time packet loss. We have reported this to BT and they are investigating the issue.
11 Dec 10:46:42
BT have passed this to TSO for investigation. We are waiting for a further update.
12 Dec 14:23:56
BT's TSO are currently investigating the issue.
16 Dec 12:07:31
Other ISPs are seeing the same problem. The BT Capacity team are now looking into this.
Wednesday 16:21:04
No update to report yet, we're still chasing BT...
Yesterday 11:09:46
The latest update from this morning is: "The BT capacity team have investigated and confirmed that the port is not being over utilized, tech services have been engaged and are currently investigating from their side."
Update expected Today 14:00:00
Expected close Today 15:14:17 (Estimated Resolution Time from AAISP)

12 Dec 11:00:40
11 Dec 10:42:15
We are seeing some TT connected lines with packet loss starting at 9AM yesterday and today. The loss lasts until 10AM, after which a low level of loss continues. We have reported this to TalkTalk.
11 Dec 10:46:34
This is the pattern of loss we are seeing:
12 Dec 12:00:04
No loss has been seen on these lines today. We're still chasing TT for any update though.
Resolution The problem went away... TT were unable to find the cause.
Broadband Users Affected 7%
Started 11 Dec 09:00:00
Closed 12 Dec 11:00:40

11 Dec 14:15:00
11 Dec 14:13:58
BT issue affecting SOHO AKA GERRARD STREET 21CN-ACC-ALN1-L-GER. We have reported this to BT and they are now investigating.
11 Dec 14:19:33
BT are investigating, however the circuits are mostly back online.
Started 11 Dec 13:42:11 by AAISP Pro Active Monitoring Systems
Closed 11 Dec 14:15:00
Previously expected 11 Dec 18:13:11 (Last Estimated Resolution Time from AAISP)

2 Dec 09:05:00
1 Dec 21:54:24
All FTTP circuits on Bradwell Abbey have packet loss. This started at about 23:45 on 30th November. This is affecting other ISPs too. BT did have an Incident open, but this has been closed. They restarted a line card last night, but it seems the problem has been present since the card was restarted. We are chasing BT.
Example graph:
1 Dec 22:38:39
It has been a struggle to get the front line support and the Incident Desk at BT to accept that this is a problem. We have passed this on to our Account Manager and other contacts within BT in the hope of a speedy fix.
2 Dec 07:28:40
BT have tried doing something overnight, but the packet loss still exists as of 7am on 2nd December. Our monitoring shows:
  • Packet loss stops at 00:30
  • The lines go off between 04:20 and 06:00
  • The packet loss starts again at 06:00 when the lines come back online
We've passed this on to BT.
2 Dec 09:04:56
Since 7AM today, the lines have been OK... we will continue to monitor.
Started 30 Nov 23:45:00
Closed 2 Dec 09:05:00

3 Dec 09:44:00
27 Nov 16:31:03
We are seeing what looks like congestion on the Walworth exchange. Customers will be experiencing high latency, packetloss and slow throughput in the evenings and weekends. We have reported this to TalkTalk.
2 Dec 09:39:27
TalkTalk are still investigating this issue.
2 Dec 12:22:04
The congestion issue on the Walworth Exchange has been identified, and TalkTalk are in the process of traffic balancing.
3 Dec 10:30:14
Capacity has been increased and the exchange is looking much better now.
Started 27 Nov 16:28:35
Closed 3 Dec 09:44:00

19 Nov 16:20:46
19 Nov 15:11:12
Lonap (one of the main Internet peering points in the UK) has a problem. We have stopped passing traffic over Lonap. Customers may have seen packetloss for a short while, but routing should be OK now. We are monitoring the traffic and will bring back Lonap when all is well.
19 Nov 16:21:29
The Lonap problem has been fixed, and we've re-enabled our peering.
Started 19 Nov 15:00:00
Closed 19 Nov 16:20:46

21 Nov 00:18:00
21 Nov 10:58:09
We have a number of TT lines down all on the same RAS: HOST-62-24-203-36-AS13285-NET. We are chasing this with TalkTalk.
21 Nov 11:01:29
Most lines are now back. We have informed TalkTalk.
21 Nov 12:18:22
TT have come back to us. They were aware of the problem; it was caused by a software fault on an LTS.
Started 21 Nov 10:45:00
Closed 21 Nov 00:18:00

25 Nov 10:43:46
21 Oct 14:10:19
We're seeing congestion from 10am up to 11:30pm on the BT Rose Street, Pimlico and High Wycombe exchanges. A fault has been raised with BT and we will post updates as soon as we can. Thanks for your patience.
28 Oct 11:23:44
Rose Street and High Wycombe are now clear. Still investigating Pimlico
3 Nov 14:41:45
Pimlico has now been passed to BT's capacity team to deal with. Further capacity is needed and will be added as soon as possible. We will provide updates as soon as they are available.
5 Nov 10:12:30
We have just been informed by the BT capacity team that end users will be moved to a different VLAN on Friday morning. We will post further updates when we have them.
11 Nov 10:23:59
Most of the Pimlico exchange is now fixed. Sorry for the delay.
19 Nov 11:01:57
There is further planned work on the Pimlico exchange for the 20th November. This should resolve the congestion on the Exchange.
25 Nov 10:44:43
Pimlico lines are now running as expected. Thanks for your patience.
Started 21 Oct 13:31:50
Closed 25 Nov 10:43:46

4 Nov 16:47:11
4 Nov 09:42:18
Graphs have been missing in recent weeks for some days and on some LNSs. This is something we are working on. Unfortunately, today one of the LNSs is not showing live graphs again, and so these will not be logged overnight. We hope to have a fix for this in the next few days. Sorry for any inconvenience.
Resolution The underlying cause has been identified and a fix will be deployed over the next few days.
Started 1 Oct
Closed 4 Nov 16:47:11
Previously expected 10 Nov

1 Nov 11:35:11
[Broadband] - Blip - Closed
1 Nov 11:55:38
There appears to have been a small DoS attack which resulted in a blip around 11:29:16 today, and caused some issues with broadband lines and other services. We're looking into this at present, and graphs are not currently visible to customers on one of the LNSs.
1 Nov 13:09:44
We expect graphs on a.gormless to be back tomorrow morning after some planned work.
Resolution Being investigated further.
Started 1 Nov 11:29:16
Closed 1 Nov 11:35:11

29 Sep 22:37:36
21 Aug 12:50:32
Over the past week or so we have been missing data on some monitoring graphs; this shows as purple for the first hour of the morning. It is being caused by delays in collecting the data, which is being looked into.
Resolution We believe this has been fixed now. We have been monitoring it for a fortnight after making an initial fix, and it looks to have been successful.
Closed 29 Sep 22:37:36

20 Sep 07:09:09
20 Sep 11:59:13
RADIUS accounting is behind at the moment. This is causing usage data to appear to be missing from customer lines. The accounting is behind, but it is not broken, and is catching up. The usage data does not appear to be lost, and should appear later in the day.
21 Sep 08:12:52
Records have now caught up.
Closed 20 Sep 07:09:09
Previously expected 20 Sep 15:57:11

26 Aug 09:15:00
26 Aug 09:02:02
Yesterday's and today's line graphs are not being shown at the moment. We are working on restoring this.
26 Aug 09:42:18
Today's graphs are back; yesterday's are lost though.
Started 26 Aug 08:00:00
Closed 26 Aug 09:15:00

1 Sep 19:42:08
1 Sep 19:42:56
c.gormless rebooted, lines moved to other LNS automatically. We are investigating.
Broadband Users Affected 33%
Started 1 Sep 19:39:19
Closed 1 Sep 19:42:08

23 Apr 10:21:03
01 Nov 2013 15:05:00
We have identified an issue that appears to be affecting some customers with FTTC modems. The issue is stupidly complex, and we are still trying to pin down the exact details. The symptoms appear to be that some packets are not passing correctly, some of the time.

Unfortunately one of the types of packet that refuse to pass correctly is FireBrick FB105 tunnel packets. This means customers relying on FB105 tunnels over FTTC are seeing issues.

The workaround is to remove the Ethernet lead from the modem and then reconnect it. This seems to fix the issue, at least until the next PPP restart. If you have remote access to a FireBrick, e.g. via the WAN IP, and need to do this, you can change the Ethernet port settings to force it to re-negotiate, which has the same effect. This only works if the FireBrick is directly connected to the FTTC modem, as the fix does need the modem's Ethernet to restart.

We are asking BT about this, and we are currently assuming this is a firmware issue on the BT FTTC modems.

We have confirmed that modems re-flashed with non-BT firmware do not have the same problem, though we don't usually recommend doing this as it is a BT modem and part of the service.

04 Nov 2013 16:52:49
We have been working on getting more specific information regarding this, we hope to post an update tomorrow.
05 Nov 2013 09:34:14
We have reproduced this problem by sending UDP packets using 'Scapy'. We are doing further testing today, and hope to write up a more detailed report about what we are seeing and what we have tested.
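As a rough illustration of the sort of probe involved (a minimal sketch, not our actual test script; the target address is a placeholder for a test sink on our side):

    from scapy.all import IP, UDP, Raw, send

    TARGET = "192.0.2.1"  # hypothetical test sink behind the LNS

    def send_probes(ports):
        # one UDP packet per source/destination port pair
        for port in ports:
            send(IP(dst=TARGET) / UDP(sport=port, dport=port) / Raw(b"probe"),
                 verbose=False)

    # send 500 port combinations, drop and re-establish PPP, then send
    # the same 500 again and see which ones no longer arrive
    send_probes(range(40000, 40500))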
05 Nov 2013 14:27:26
We have some quite good demonstrations of the problem now, and it looks like it will mess up most VPNs based on UDP. We can show how a whole range of UDP ports can be blacklisted by the modem somehow on the next PPP restart. It is crazy. We hope to post a little video of our testing shortly.
05 Nov 2013 15:08:16
Here is an update/overview of the situation. (from http://revk.www.me.uk/2013/11/bt-huawei-fttc-modem-bug-breaking-vpns.html )

We have confirmed that the latest code in the BT FTTC modems appears to have a serious bug that is affecting almost anyone running any sort of VPN over FTTC.

Existing modems seem to be upgrading, presumably due to a roll-out of new code by BT. An older modem that has not been online for a while is fine. A re-flashed modem with non-BT firmware is fine. A modem that had been working on the line for a while suddenly stopped working, presumably having been upgraded.

The bug appears to be that the modem manages to "blacklist" some UDP packets after a PPP restart.

If we send a number of UDP packets, using various UDP ports, then cause PPP to drop and reconnect, we then find that around 254 combinations of UDP IP/ports are now blacklisted. I.e. they no longer get sent on the line. Other packets are fine.

If we send 500 different packets, around 254 of them will not work again after the PPP restart. It is not simply the first or last 254 packets (some are in the middle), but it does seem to be 254 combinations. They work as much as you like before the PPP restart, and then never work after it.

We can send a batch of packets, wait 5 minutes, PPP restart, and still find that packets are now blacklisted. We have tried a wide range of ports, high and low, different src and dst ports, and so on - they are all affected.

The only way to "fix" it, is to disconnect the Ethernet port on the modem and reconnect. This does not even have to be long enough to drop PPP. Then it is fine until the next PPP restart. And yes, we have been running a load of scripts to systematically test this and reproduce the fault.

The problem is that a lot of VPNs use UDP and use the same set of ports for all of the packets, so if that combination is blacklisted by the modem the VPN stops after a PPP restart. The only way to fix it is manual intervention.

The modem is meant to be an Ethernet bridge. It should not know anything about PPP restarting or UDP packets and ports. It makes no sense that it would do this. We have tested swapping working and broken modems back and forth. We have tested with a variety of different equipment doing PPPoE and IP behind the modem.

BT are working on this, but it is a serious concern that this is being rolled out.
12 Nov 2013 10:20:18
Work on this is still ongoing... We have tested this on a standard BT retail FTTC 'Infinity' line, and the problem cannot be reproduced. We suspect this is because a different IP address is allocated each time the PPP re-establishes, so whatever is doing the session tracking does not match the new connection.
12 Nov 2013 11:08:17

Here is an update with a more specific explanation of the problem we are seeing:

On WBC FTTC, we can send a UDP packet inside the PPP and then drop the PPP a few seconds later. After the PPP re-establishes, UDP packets with the same source and destination IP and ports won't pass; they do not reach the LNS at the ISP.

Further to that, it's not just one src+dst IP and port tuple which is affected. We can send 254 UDP packets using different src+dst ports before we drop the PPP. After it comes back up, all 254 port combinations will fail. It is worth noting that this cannot be reproduced on an FTTC service which allocates a dynamic IP that changes each time the PPP re-establishes.

If we send more than 254 packets, only 254 will be broken and the others will work. It's not always the first or last 254; the broken ones move around between tests.
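The receiving side of such a test can be equally simple: record which port tuples arrive before the PPP restart and which arrive after, then diff the two sets. A minimal sketch (hypothetical address again):

    from scapy.all import sniff, UDP

    seen = set()

    def note(pkt):
        if UDP in pkt:
            seen.add((pkt[UDP].sport, pkt[UDP].dport))

    # run once before the PPP restart and once after, then compare:
    # tuples present in the first set but missing from the second are
    # the "blacklisted" combinations
    sniff(filter="udp and dst host 192.0.2.1", prn=note, timeout=60)
    print(sorted(seen))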

So it sounds like the modem (or, less likely, something in the cab or exchange) is creating state table entries for packets it is passing which tie them to a particular PPP session, and then failing to flush the table when the PPP goes down.

This is a little crazy in the first place. It's a modem. It shouldn't even be aware that it's passing PPPoE frames, let alone looking inside them to see that they are UDP.

This only happens when using an Openreach Huawei HG612 modem that we suspect has been remotely and automatically upgraded by Openreach in the past couple of months. Further, an HG612 modem with the 'unlocked' firmware does not have this problem, and an HG612 modem that has probably not been automatically/remotely upgraded does not have this problem.

Side note: one theory is that the brokenness is actually happening in the street cab and not the modem, and that the new firmware in the modem which is triggering it has enabled 'link-state forwarding' on the modem's Ethernet interface.

27 Nov 2013 10:09:42
This post has been a little quiet, but we are still working with BT/Openreach regarding this issue. We hope to have some more information to post in the next day or two.
27 Nov 2013 10:10:13
We have also had reports from someone outside of AAISP who has reproduced this problem.
27 Nov 2013 14:19:19
We have spent the morning with some nice chaps from Openreach and Huawei. We demonstrated the problem, and they were able to take traffic captures at various points on their side. Huawei HQ can now reproduce the problem and will investigate it further.
28 Nov 2013 10:39:36
Adrian has posted about this on his blog: http://revk.www.me.uk/2013/11/bt-huawei-working-with-us.html
13 Jan 14:09:08
We are still chasing this with BT.
3 Apr 15:47:59
We have seen this affect SIP registrations (which use 5060 as the source and target)... Customers can contact us and we'll arrange a modem swap.
23 Apr 10:21:03
BT are in the process of testing an updated firmware for the modems with customers. Any customers affected by this can contact us and we can arrange a new modem to be sent out.
Resolution BT are testing a fix in the lab and will deploy in due course, but this could take months. However, if any customers are adversely affected by this bug, please let us know and we can arrange for BT to send a replacement ECI modem instead of the Huawei modem. Thank you all for your patience.

BT do have a new firmware that they are rolling out to the modems. So far it does seem to have fixed the fault, and we have not heard of any other issues as yet. If you do still have the issue, please reboot your modem; if the problem remains, please contact support@aa.net.uk and we will try to get the firmware rolled out to you.
Started 25 Oct 2013
Closed 23 Apr 10:21:03

13 Aug 09:15:00
13 Aug 11:26:08
Due to a RADIUS issue we were not receiving line statistics from just after midnight. As a result we needed to force lines to log in again. This would have caused lines to lose their PPP connection and then reconnect at around 9AM. We apologise for this, and will be investigating the cause.
Started 13 Aug 09:00:00
Closed 13 Aug 09:15:00

8 Aug 15:25:00
8 Aug 15:42:28
At 15:15 we saw customers on the 'D' LNS lose their connection and reconnect a few moments later. The cause of this is being looked into.
Resolution Lines quickly came back online, we apologise for the drop though. The cause will be investigated.
Started 8 Aug 15:15:00
Closed 8 Aug 15:25:00

1 Aug 10:00:00
We saw what looks to be congestion on some lines on the Rugby exchange (BT lines), showing slight packet loss on Sunday evening. We'll report this to BT.
30 Jul 11:03:08
Card replaced early hours this morning, which should have fixed the congestion problems.
Started 27 Jul 21:00:00
Closed 1 Aug 10:00:00

28 Jul 11:00:00
28 Jul 09:20:03
Customers may have seen a drop and reconnect of their broadband lines this morning. Due to a problem with our RADIUS accounting on Sunday, we needed to restart our customer database server, Clueless. This has been done, and Clueless is back online. Because of the initial RADIUS accounting problem, most DSL lines have had to be restarted.
28 Jul 10:02:13
We are also sending out order update messages in error, e.g. emails about orders that have already completed. We apologise for the confusion and are investigating.
Started 28 Jul 09:00:00
Closed 28 Jul 11:00:00

17 Jul 17:45:00
17 Jul 16:23:15
We have a few reports from customers, and a vague incident report from BT, suggesting there may be a PPP problem within the BT network which is affecting customers logging in to us. Customers may see their ADSL router in sync, but unable to log in (no PPP).
17 Jul 16:40:31
This looks to be affecting BT ADSL and FTTC circuits. A line which tries to log in may well fail.
17 Jul 16:42:34
Some lines are logging in successfully now.
17 Jul 16:54:15
Not all lines are back yet, but lines are still logging back in, so if you are still offline it may take a little more time.
Resolution This was a BT incident, reference IMT26151/14. This was closed by BT at 17:45 without giving us further details about what the problem was or what they did to restore service.
Started 17 Jul 16:00:00
Closed 17 Jul 17:45:00

11 Jul 11:03:55
11 Jul 17:00:48
The "B" LNS restarted today, unexpectedly. All lines reconnected within minutes (however fast the model retries). We'll clear some traffic off the "D" server back to the "B" server later this evening.
Resolution We're investigating the cause of this.
Broadband Users Affected 33%
Started 11 Jul 11:03:52
Closed 11 Jul 11:03:55

1 Jul 23:25:00
1 Jul 20:50:32
We have identified some TalkTalk backhaul lines with congestion starting around 16:20, currently showing 100ms latency with 2% packet loss. This affects around 3% of our TT lines.

We have techies in TalkTalk on the case and hope to have it resolved soon.

1 Jul 20:56:19
"On call engineers are being scrambled now - we have an issue in the wider Oxford area and you should see an incident coming through shortly."
Resolution Engineers fixed the issue last night.
Started 1 Jul 16:20:00
Closed 1 Jul 23:25:00
Previously expected 2 Jul

19 Jun 14:33:59
11 Mar 10:11:55
We are seeing multiple exchanges with packet loss over BT Wholesale. We are chasing BT on this and will post updates as and when we have them. The affected exchanges/RAS are: GOODMAYES, CANONBURY, HAINAULT, SOUTHWARK, LOUGHTON, HARLOW, NINE ELMS, UPPER HOLLOWAY, ABERDEEN DENBURN, HAMPTON, INGREBOURNE, COVENTRY and 21CN-BRAS-RED6-SF.
14 Mar 12:49:28
This has now been escalated to the next level for further investigation.
17 Mar 15:42:38
BT are now raising faults on each individual exchange.
21 Mar 10:19:24
Below are the exchanges/RAS which have been fixed by capacity upgrades. We are hoping for the remaining four exchanges to be fixed in the next few days.
21 Mar 15:52:45
COVENTRY should be resolved later this evening when a new link is installed between Nottingham and Derby. CANONBURY is waiting for CVLAN moves that began 19/03/2014 and will be completed 01/04/2014.
25 Mar 10:09:23
CANONBURY - Planned engineering works took place on 19.3.14, with three more planned for 25.3.14, 26.3.14 and 1.4.14.
COVENTRY - Is now fixed.
NINE ELMS and UPPER HOLLOWAY - Still suffering from packet loss; BT are investigating further.
2 Apr 15:27:11
BT are still investigating congestion on Canonbury, Nine Elms and Upper Holloway.
23 Apr 11:45:44
CANONBURY - Further PEWs on 7th and 8th May.
NINE ELMS - A total of 384 EUs have been migrated. A further 614 are planned to be migrated in the early hours of 25/04/14.
UPPER HOLLOWAY - Planned engineering work on 28th April.
BEULAH HILL and TEWKESBURY - Seeing congestion at peak times; we are chasing BT on this also.
30 Apr 12:51:24
NINE ELMS - T11003 - Investigations are still ongoing.
UPPER HOLLOWAY - T11004 - BT are working on this and a resolution should be available soon.
TEWKESBURY - T11200 - This is on the backhaul list and will be dealt with shortly; the work request was closed as no investigation was required. BT are working on this and a resolution should be available soon.
MONMOUTH - T11182 - ALS583669 - This was balanced. We have advised BT that this is still not up to standard, and they will continue to investigate. This is also on the backhaul spreadsheet, so it is being investigated by the capacity team.
BEULAH HILL - Being investigated.
2 May 12:45:16
CANONBURY - 580 EUs being migrated on 7th May and 359 EUs on 8th May.
NINE ELMS - Emergency PEW PW238650 taking place in the early hours of 02/05/14, to move 500 circuits off 4 ISPVs onto other ISPVs.
UPPER HOLLOWAY - Currently BT TSO have 12 projects scheduled for Upper Holloway.
TEWKESBURY - This is with BT TSO / Backhaul upgrades.
MONMOUTH - This is with BT TSO / Backhaul upgrades.
BEULAH HILL - Possibly fixed last night. Will monitor to see if any better this evening
BAYSWATER - Packet loss identified and reported to BT
6 May 11:44:59
CANONBURY - 580 EUs being migrated on 7th May and 359 EUs on 8th May.

Still seeing some lines with issues after the upgrade. Passed back to BT.
9 May 16:16:33
UPPER HOLLOWAY - We have asked the team dealing with this for the latest update; email sent today 9/05/2014.
MONMOUTH - BT TSO are still chasing this.
BEULAH HILL - BT TSO are chasing for a date for a PEW so the work can be carried out.
BAYSWATER - BT TSO are still chasing this.
READING EARLEY - Unbalanced LAG identified. Rebalancing will be completed out of hours; no ETA on this, sorry.
15 May 10:47:22
MONMOUTH - We have been advised that the target date for the capacity increase is the 22nd May.
BEULAH HILL - Escalated this to a Duty Manager asking if he can gain an update.
EARLEY - TSO advised that the capacity team have replied and hope to get the new 10gig links into service this month. No further updates, so we have escalated to a Duty Manager to try to ascertain a specific date in May 2014 when this will take place.
21 May 09:32:00
Reading Earley / Monmouth - Now fixed.
Bayswater - We have received a reply from the capacity management team, advising that to alleviate capacity issues, moves are taking place on May 23rd and May 28th.
Beulah Hill - Due to issues with cabling this has been delayed. We are currently awaiting a date when the cables can be run so that the integration team can bring this into service.
2 Jun 15:15:55
Bayswater - Now fixed
Beulah Hill - To alleviate capacity issues, moves are taking place between June 2nd and June 6th.
10 Jun 12:16:52
Beulah Hill - Now fixed.
AYR - Seeing congestion on many lines, which has been reported.
19 Jun 14:33:06
AYR - Is now fixed
Broadband Users Affected 1%
Started 9 Mar 10:08:25 by AAISP Pro Active Monitoring Systems
Closed 19 Jun 14:33:59

11 Jun 15:08:59
11 Jun 15:12:53
It looks like one of our LNSs restarted. This will have affected a third of our broadband customers. Lines all reconnected straight away and customers should not see any further problems. The usage graphs from midnight until the restart will have been lost.
Broadband Users Affected 33%
Started 11 Jun 15:05:00
Closed 11 Jun 15:08:59

12 May 08:55:06
10 May 15:52:02
At 15:33 all 20CN lines on Kingston RASs dropped. We are chasing BT now.
10 May 16:05:18
BT have raised an incident. Apparently the problem was caused by power issues at London Kingston.
12 May 08:55:29
This was fixed after power was restored and a remote reset was performed.
Started 10 May 15:50:27 by AAISP Staff
Closed 12 May 08:55:06
Cause BT

28 Apr 13:37:28
24 Apr 14:23:02
Some TalkTalk connected lines dropped at around 14:14. They are reconnecting now though. We'll investigate and will update this post.
24 Apr 14:29:01
This looked like it was a wider TalkTalk problem as other ISPs were also affected.
Most lines are back online now though. We will investigate further.
24 Apr 14:40:50
TalkTalk have been contacted and a Reason for Outage has been requested.
24 Apr 15:02:33
TalkTalk have confirmed the outage on their status page: http://solutions.opal.co.uk/network-status-report.php?reportid=3893
24 Apr 16:24:24
Update from TalkTalk: 15:59 24/04/2014 Supplier has noticed a link flap between two exchanges which resulted in brief loss of service for some DSL customers. The traffic was reconverged over alternative links. Supplier is still investigating for the root cause.
Resolution Incident was due to a transmission failure which the supplier is investigating with the switch vendor. We've also had this update from TalkTalk: The cause was identified as a blown rectifier.
Started 24 Apr 14:14:00
Closed 28 Apr 13:37:28

2 May 08:48:41
2 May 08:13:36
We did some work yesterday to try to ensure we are correctly tracking lines being up and down. If there is ever a problem with RADIUS accounting, this can get out of step. It is meant to sort itself out automatically, but there seemed to be some cases where that was not quite happening.

Unfortunately the change led to lots of up/down emails, texts, and tweets overnight.

We think we have managed to address that now, and will be monitoring during the day.

Resolution We believe this is all sorted now.
Started 1 May 20:00:00
Closed 2 May 08:48:41
Previously expected 2 May 12:00:00

2 May 08:48:46
22 Mar 07:36:41
We started to see yet more congestion on BT lines last night. This again looks a bit like a link aggregation issue (where one leg of a multiple-link trunk within BT is full). The pattern is not as obvious this time. Looking at the history we can see that some of the affected lines have had slight loss in the evenings. We did not spot this with our tools because of the rather odd pattern. Obviously we are trying to get this sorted with BT, but we are pleased to say that BT are now providing data showing which network components each circuit uses within their network. We plan to integrate this soon so that we can correlate some of these newer congestion issues and point BT in the right direction more quickly.
Started 21 Mar 18:00:00
Closed 2 May 08:48:46

24 Apr 13:36:17
17 Feb 20:13:09
We are seeing packet loss at peak times on some lines on the Crouch End exchange. It's a small number of customers, and it looks like a congested SVLAN. This has been reported to BT.
18 Feb 10:52:26
Initially BT were unable to see any problem, their monitoring was not showing any congestion and they wanted us to report individual line faults rather than this being dealt as a specific BT network problem. However we have spoken to another ISP who confirms the problem. BT have now opened an Incident and will be investigating.
18 Feb 11:12:47
We have passed all our circuit details and graphs to proactive to investigate.
18 Feb 16:31:17
TSO will investigate overnight
20 Feb 10:15:02
No updates from TSO, proactive are chasing.
27 Feb 13:24:38
There is still congestion, we are chasing BT again.
28 Feb 09:34:50
It appears the issue is on the MSE router. Lines connected to the MSE are due to be migrated on 21st March, and BT are hoping to have this done by then.
24 Apr 15:25:06
All lines on the Crouch End exchange are now showing clear.
Broadband Users Affected 0.10%
Started 17 Feb 20:10:29
Closed 24 Apr 13:36:17

4 Apr 17:05:09
8 Apr 16:58:41
Some lines on the BT LEITH exchange have gone down. BT are aware and are investigating at the moment.
Started 8 Apr 16:30:20 by Customer report
Closed 4 Apr 17:05:09

3 Apr 12:26:40
25 Mar 09:55:20

We are seeing customer routers being attacked this morning, which is causing them to drop. This was previously reported in the status post http://status.aa.net.uk/1877, where we saw that the attacks were affecting ZyXEL routers as well as other makes.

Since that post we have updated the configuration of customer ZyXEL routers where possible, and these are no longer being affected. However, these attacks are affecting other types of router.

We suggest that customers with lines that are dropping check their router configuration and disable access to the router's web interface from the internet, or at least change the port used (e.g. to one in the range 1024-65535). A quick way to check whether the web interface is exposed is sketched below.

Please speak to Support for more information.
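A minimal sketch of such a check (run it from a host outside your own network; the address below is a placeholder for your WAN IP):

    import socket

    def port_open(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # True means the router's web interface answers from the internet
    # and should be disabled or moved to a non-default port
    print(port_open("198.51.100.10", 80))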

28 Mar 10:13:13
This is happening again; do speak to Support if you need help changing the web interface settings.
Customers with ZyXELs can change the port from the Control Pages.
Started 25 Mar 09:00:40
Closed 3 Apr 12:26:40

1 Apr 10:00:00
1 Apr 12:13:31
Some TalkTalk connected lines dropped at around 09:50 and reconnected a few minutes after. It looks like a connectivity problem between us and TalkTalk on one of our connections to them. We are investigating further.
Started 1 Apr 09:50:00
Closed 1 Apr 10:00:00

31 Mar 15:03:25
31 Mar 09:40:40
Some TalkTalk line diagnostics (signal graphs and line tests), as available from the Control Pages, are not working at the moment. This is being looked into.
31 Mar 15:03:17
This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Resolution This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Started 31 Mar 09:00:00
Closed 31 Mar 15:03:25

20 Mar 11:17:21
20 Mar 08:38:52
Customers will be seeing what looks like 'duplicated' usage reporting on the Control Pages for last night and this morning. This has been caused by a database migration that is taking longer than expected. The 'duplication' arises because usage reports were missed, and in subsequent hours the missed usage has been spread equally across the missed hours.
This means that overall the usage reporting will be correct, but an individual hour may be incorrect.
This has also affected a few other related things such as the Line Colour states.
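As a toy illustration of the back-fill behaviour described above (made-up numbers): if three hourly reports were missed and 600MB of usage arrives at the catch-up, each missed hour is shown as 200MB:

    # usage arriving after a gap is spread evenly across the missed
    # hours, so the overall total is right but each hour is not
    missed_hours = ["18:00", "19:00", "20:00"]
    usage_at_catchup_mb = 600

    per_hour = usage_at_catchup_mb / len(missed_hours)
    backfilled = {hour: per_hour for hour in missed_hours}
    print(backfilled)  # {'18:00': 200.0, '19:00': 200.0, '20:00': 200.0}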
20 Mar 11:17:55
Usage reporting is now back to normal.
Started 19 Mar 18:00:00
Closed 20 Mar 11:17:21

2 Mar 11:33:29
1 Mar 04:24:02
Lines: 100% 21CN-REGION-GI-B dropped at 2014-03-01 04:22:17
We have advised BT
This is likely to have affected multiple internet providers using BT
1 Mar 04:25:06
Lines: 100% 21CN-REGION-GI-B dropped again at 2014-03-01 04:23:21.
Broadband Users Affected 2%
Started 1 Mar 04:22:17 by AAISP automated checking
Closed 2 Mar 11:33:29
Cause BT

11 Mar 09:32:42
6 Mar 13:07:51

We have had a small number of reports from customers who have had the DNS settings on their routers altered. The IPs we are seeing set are and (there may be others)

This type of attack is called Pharming. In short, it means that any internet traffic could be redirected to servers controlled by the attacker.

There is more information about pharming on the following pages:

At the moment we are logging when customers try to access these IP addresses, and we are then contacting those customers to make them aware.

To solve the problem we are suggesting that customers replace the router or speak to their local IT support.

6 Mar 13:33:10
Changing the DNS settings back to auto, changing the administrator password and disabling WAN side access to the router may also prevent this from happening again.
6 Mar 13:48:14
Also reported here: http://www.pcworld.com/article/2104380/
Resolution We have contacted the few affected customers.
Started 6 Mar 09:00:00
Closed 11 Mar 09:32:42

7 Mar 15:08:45
7 Mar 15:10:59
Some broadband lines blipped at 15:05. This was a result of one of our LNSs restarting. Lines are back online and we'll investigate the cause.
Started 7 Mar 15:03:00
Closed 7 Mar 15:08:45

27 Feb 20:40:00
27 Feb 20:29:14
We are seeing some TT lines dropping and a routing problem.
27 Feb 20:39:20
Things are ok now, we're investigating. This looks to have affected some routing for broadband customers and caused some TT lines to drop.
Resolution We are not entirely sure what caused this, however we do believe it to be related to BGP flapping. This also looks to have affected other ISPs and networks too.
Started 27 Feb 20:18:00
Closed 27 Feb 20:40:00

16 Feb 17:59:00
16 Feb 18:12:15
All lines reconnected right away as per the normal backup systems, but graphs on the "B" LNS have lost their history from before the reset. The exact cause is not obvious yet, but at the same time there is yet another of these quite regular attacks on ZyXEL routers, which adds to the confusion. As advised in another status post, there are changes to ZyXEL router config planned to address that issue.
Broadband Users Affected 33.33%
Started 16 Feb 17:58:00
Closed 16 Feb 17:59:00

24 Feb 12:00:00
11 Jan 08:42:32
Since around 2am, as well as in a short burst last night around 19:45, we have seen issues with some lines. This appears to be specific to certain types of router being used on the lines. We are still investigating.
11 Jan 10:53:53
At the moment we have managed to identify at least some of the traffic and the affected routers, and to block it temporarily. We'll be able to provide more specific advice on the issue and contact affected customers in due course.
13 Jan 14:07:56
We blocked a further IP this morning.
15 Jan 08:17:47
The issue is related to specific routers and is affecting many ISPs. In our case it is almost entirely ZyXEL routers that are affected. It appears to be some sort of widespread and ongoing SYN flood attack that is causing routers to crash, resulting in loss of sync. We are operating some source IP blocking temporarily to address these issues for the time being, and will shortly have a simple button on our Control Pages for affected customers to reconfigure ZyXEL routers.
7 Feb 10:24:07
Last night and this morning there was another flood of traffic causing ZyXELs to restart. We suggest changing the web port to something other than 80, details can be found here: http://wiki.aa.org.uk/Router_-_ZyXEL_P660R-D1#Closing_WAN_HTTP
13 Feb 10:44:41
We will be contacting ZyXEL customers by email over the next few days regarding these problems. Before that, to verify our records of the router type, we will be performing a 'scan' of customers' WAN IP addresses. This scan will simply involve downloading the index page from the WAN address.
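In essence the scan is no more sophisticated than this sketch (hypothetical address, and not our actual scanner):

    import urllib.request

    def looks_like_zyxel(ip, timeout=5):
        # fetch the index page from the WAN address and look for a
        # ZyXEL banner in the body or the Server header
        try:
            with urllib.request.urlopen("http://%s/" % ip, timeout=timeout) as r:
                body = r.read(2048).decode(errors="replace")
                server = r.headers.get("Server", "")
            return "zyxel" in body.lower() or "zyxel" in server.lower()
        except OSError:
            return False

    print(looks_like_zyxel("198.51.100.10"))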
20 Feb 21:34:54
Customers with ZyXELs online have been contacted this week regarding this issue.
24 Feb 11:17:13
As per email to affected customers, we are updating the http port on ZyXEL routers today - Customers will be emailed as their router is updated.
Resolution Affected customers have been notified, tools are in place on the Control Pages for customers to manage the http port, and where appropriate ZyXEL routers have had their http port and WAN settings changed.
Broadband Users Affected 5%
Started 11 Jan 02:00:00
Closed 24 Feb 12:00:00

22 Feb 08:00:00
22 Feb 07:56:22
There seems to have been something going on between 2am and 3am. We even had some incidents reported within BT, but whatever was going on managed to cause an unexpected restart of one of our LNSs ("B") at just after 3am, so graphs from before then are lost. At 07:55 lines that had ended up on the "D" LNS were moved back to the "B" LNS, causing a PPP restart.
Broadband Users Affected 33.33%
Started 22 Feb 03:00:00
Closed 22 Feb 08:00:00
Previously expected 22 Feb 08:00:00

20 Feb 18:18:00
20 Feb 09:20:19
We are seeing some lines unable to log in since a blip at 02:49. We are contacting BT. These lines are in sync, but PPP is failing. It looks like a number of BT RASs are affected, including 21CN-BRAS-RED9-GI-B and 21CN-BRAS-RED1-NT-B.
20 Feb 09:31:18
BT were already aware of the problem and are investigating.
20 Feb 12:23:12
These lines are still down, we are chasing BT.
20 Feb 13:21:20
BT believed this issue had been fixed. We have supplied them with details of all of our circuits that are down. This has been passed to TSO and we should have an update in the next hour.
20 Feb 14:26:44
A new incident has been raised as BT thought the issue was fixed.
20 Feb 14:27:56
The issue is apparently still being diagnosed.
20 Feb 21:17:48
BT fixed this at 18:18 this evening.
20 Feb 21:34:04
BT say:
BT apologises for the problems experienced today by WMBC customers and are pleased to advise the issue has been fully resolved following the back out of a planned work completed overnight. BT is aware and understands the fault which occurred and have engaged vendor support to commence urgent investigations to identify the root cause.
The BT Technical Services teams have monitored the network since the corrective actions taken at 18:04 and have confirmed the network has remained stable.
Broadband Users Affected 0.20%
Started 20 Feb 03:49:00
Closed 20 Feb 18:18:00

20 Feb 10:00:00
20 Feb 10:24:43
In addition to https://status.aa.net.uk/1891, there is a UK-wide problem with lines logging in. This is affecting other ISPs, and is affecting a small number of our lines. BT are already aware.
20 Feb 11:07:55
BT are saying this is now fixed. We saw affected lines come back online just after 10am. BT say about half of the UK 21CN WBC lines were affected; however, we only saw a few dozen lines affected.
Started 20 Feb 09:00:00
Closed 20 Feb 10:00:00

1 Feb 09:00:00
1 Feb 03:38:03
Lines: 100% 21CN-REGION-PR dropped at 2014-02-01 03:36:28
We have advised BT
This is likely to have affected multiple internet providers using BT
Broadband Users Affected 1%
Started 1 Feb 03:36:28 by AAISP automated checking
Closed 1 Feb 09:00:00
Cause BT

6 Feb 10:00:00
6 Feb 02:07:02
Lines: 100% 21CN-REGION-DY dropped at 2014-02-06 02:05:49
We have advised BT
This is likely to have affected multiple internet providers using BT
Broadband Users Affected 1%
Started 6 Feb 02:05:49 by AAISP automated checking
Closed 6 Feb 10:00:00
Cause BT

11 Feb 22:27:34
3 Feb 16:19:38

We have a fault open with BT regarding the Harvington exchange. We are seeing packet loss, typically between 8am and 2am, reaching up to 20% at peak times in the evening.

BT have already tried resetting the line card, but this has not worked.

BT are still investigating.

3 Feb 16:22:45
Example graph:
3 Feb 20:34:34
This has been escalated within BT. Other ISPs are seeing a similar issue. Currently, BT's 'Technical Services' are investigating the problem.
5 Feb 10:16:58

BT worked at the exchange in the early hours of this morning to try to resolve the issue. We will have to wait until around 3pm today to see if the heavy packet loss has been fixed.

The details from BT are as follows: "The technical team have worked all night on this issue. An engineer was sent to the exchange in the early hours of this morning and has reseated several IML cables in the network to see if this alleviates the issue. Ping testing has been carried out extensively since the reseats and where there was small packet loss seen prior to the reseat these are now proving to be totally clear."

5 Feb 16:43:59
Looks like the amount of loss is increasing. BT are still investigating.
6 Feb 11:30:34
From BT: Will get this info back over now and ensure tech services are involved to get to the bottom of this issue as agree this is really frustrating that we cannot find the route cause here
7 Feb 09:06:05
Chasing BT for an update
7 Feb 09:56:20
The controller card was reset, rather than changed, at 02:56am this morning, and TSO are now waiting for confirmation of whether this has made a difference.
10 Feb 09:22:15
BT's efforts over the weekend have not fixed the problem. We will be chasing BT again.
10 Feb 10:01:42
BT are looking to see if it is possible to move our affected lines onto a different SVLAN (in short, a different link out of the exchange). We'll update this post when we get an update from BT.
10 Feb 17:00:06
BT are planning to move the lines to a different SVLAN, we're not sure when this will be done yet though. We'll update again when we have further information.
11 Feb 22:28:06
BT have moved the lines to a different SVLAN, and the packet loss problem has gone away.
Started 21 Jan 09:00:00
Closed 11 Feb 22:27:34

4 Feb 16:00:00
3 Feb 14:45:01

One of our authoritative DNS servers, secondary-dns.co.uk, has died. We are carrying out an emergency migration onto new hardware.

DNS services are still running, albeit only on the Primary Name server. This time period is considered 'at risk'.

The new server should be up and running in the next hour or so.

3 Feb 22:09:46
The replacement server is serving zones correctly, but customers who use secondary-dns.co.uk as a secondary to their own nameserver may not have their zones served yet. This is because our backup of the zone files we slave for customers is out of date; rather than serve potentially incorrect records, we are simply not serving those zones. Affected customers should send a notify from their own nameserver to trigger a fresh zone transfer. This does not affect domains for which we provide primary DNS.
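If your nameserver cannot easily be told to re-send a notify, a one-off NOTIFY can be generated by hand. A minimal sketch using the dnspython library (the zone name and server address are placeholders):

    import dns.flags
    import dns.message
    import dns.opcode
    import dns.query
    import dns.rcode
    import dns.rdatatype

    # build a NOTIFY for the zone's SOA and send it to the secondary
    notify = dns.message.make_query("example.co.uk.", dns.rdatatype.SOA)
    notify.set_opcode(dns.opcode.NOTIFY)
    notify.flags &= ~dns.flags.RD  # a NOTIFY is not a recursive query

    response = dns.query.udp(notify, "203.0.113.53", timeout=5)
    print(dns.rcode.to_text(response.rcode()))  # NOERROR means it was accepted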
Started 3 Feb 14:40:41
Closed 4 Feb 16:00:00

21 Jan 12:59:59
21 Jan 09:44:47
As of 8:30 this morning most 20CN lines connected to a Sheffield RAS have either been dropping or showing packet loss. We are reporting this to BT at the moment.
21 Jan 10:45:51
BT Engineer on the way to site.
21 Jan 11:07:59
Example graph:
21 Jan 12:58:09
Lines are looking better now.
21 Jan 12:58:22
Lines are looking stable again. No news yet from BT.
21 Jan 13:00:23
BT have reset a card, to resolve the issue.
Resolution Card reset.
Started 21 Jan 09:43:03 by AAISP Pro Active Monitoring Systems
Closed 21 Jan 12:59:59

8 Jan 10:20:00
8 Jan 10:13:35
We are seeing a small number of lines flapping (dropping and reconnecting).
We are investigating.
8 Jan 10:30:20
The dropping has stopped for the moment, lines are back to normal.
Started 8 Jan 09:49:00
Closed 8 Jan 10:20:00

27 Dec 2013 19:50:00
27 Dec 2013 14:58:36
We're seeing some issues with some of BT's BRASs. It looks like it's mostly those in Slough (the BRASs ending -SL).
27 Dec 2013 16:11:03
BT are investigating.
27 Dec 2013 17:46:20
Graphs look like this:
27 Dec 2013 18:18:33
BT have an incident open for this fault, and are investigating.
27 Dec 2013 21:10:13
Lines cleared up at about 19:50 and are looking back to normal now. No word from BT yet though.
27 Dec 2013 21:45:39
Lines now looking normal:
Broadband Users Affected 5%
Started 27 Dec 2013 14:38:00
Closed 27 Dec 2013 19:50:00