
6 Apr 01:00:00
Details
Yesterday 11:19:03
Due to maintenance being performed by a carrier, disruption of up to 15 minutes to 3G/4G data services is expected some time between 01:00 and 03:00 on the 6th of April.
Planned start 6 Apr 01:00:00
Expected close 6 Apr 03:00:00

9 Mar 20:00:00
Details
8 Mar 12:29:14

We continue to work with TalkTalk to get to the bottom of the slow throughput issue as described on https://aastatus.net/2358

We will be performing some routing changes and tests this afternoon and this evening. We are not expecting this to cause any drops for customers, but there will be times this evening when throughput for 'single thread' downloads will be slow. Sorry for the short notice; please bear with us, as this is proving a tricky fault to track down.

Update
8 Mar 22:39:39
Sorry, due to TalkTalk needing extra time to prepare for their changes, this work has been moved to Thursday 9th evening.
Started 9 Mar 20:00:00
Update was expected 9 Mar 23:00:00

2 Mar 11:14:48
Details
7 Feb 14:32:32

We are seeing issues with IPv6 on a few VDSL cabinets serving our customers. There is no apparent geographical commonality amongst these, as far as we can tell.

Lines pass IPv4 traffic fine, but only pass IPv6 TCP/UDP intermittently, for brief amounts of time, usually 4 or so packets, before breaking. Customers have tried a BT modem, an Asus modem, and our supplied ZyXEL as a modem and router, with no difference on any. We have also lent them a FireBrick to take some traffic dumps.

Traffic captures at our end and the customer end show that the IPv6 TCP and UDP packets are leaving us but not reaching the customer. ICMP (e.g. pings) does work.
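As a rough way to reproduce the symptom from an affected line, a minimal sketch is below; the test host is a hypothetical dual-stack machine you control. On an affected line, ICMPv6 succeeds while an IPv6 TCP exchange stalls.

```python
# Rough reproduction check for the symptom above. The host is a
# hypothetical dual-stack machine you control; on some systems the
# ping command is "ping6" rather than "ping -6".
import socket
import subprocess

HOST, PORT = "v6test.example.net", 80   # hypothetical test host

# ICMPv6 echo: reported to work even on affected lines
icmp_ok = subprocess.run(["ping", "-6", "-c", "3", HOST],
                         capture_output=True).returncode == 0

# TCP over IPv6: reported to stall after a handful of packets
tcp_ok = False
try:
    info = socket.getaddrinfo(HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM)[0]
    with socket.socket(info[0], info[1]) as s:
        s.settimeout(10)
        s.connect(info[4])
        s.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        tcp_ok = bool(s.recv(4096))
except OSError:
    pass

print(f"ICMPv6 {'ok' if icmp_ok else 'FAIL'}, TCPv6 {'ok' if tcp_ok else 'FAIL'}")
```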

The first case was reported to us in August 2016, and it has taken a while to get to this point. Until very recently there was only a single reported case. Now that we have four cases we have a bit more information and are able to look at commonalities between them.

Of these circuits, two are serving customers via TalkTalk and two are serving customers via BT backhaul, so this isn't a "carrier network issue", as far as we can make out. The only commonality we can find is that the cabinets are all ECI. (In fact, one of the BT-connected customers has migrated to TalkTalk backhaul (still with us, using the same cabinet and phone line etc) and the IPv6 bug has moved to the new circuit with TalkTalk as the backhaul provider.)

We are working with senior TalkTalk engineers to try to perform a traffic capture at the exchange - at the point the traffic leaves TalkTalk equipment and is passed on to Openreach - this will show if the packets are making it that far and will help in pinning down the point at which packets are being lost. Understandably this requires TalkTalk engineers working out of hours to perform this traffic capture and we're currently waiting for when this will happen.

Update
2 Mar 11:14:48
Packet captures on an affected circuit carried out by TalkTalk have confirmed that this issue most likely lies in the Openreach network. The circuits we have been made aware of are being pursued with both BT and TalkTalk so that Openreach can investigate the issue further.
If you believe you may be affected please do contact support.
Update
17 Mar 09:44:00
Having had TalkTalk capture the traffic in the exchange, the next step is to capture traffic at the road-side cabinet. This is being progressed with Openreach and we hope it will happen 'soon'.
Update
Wednesday 09:52:52
We've received an update from BT advising that they have been able to replicate the missing IPv6 packets; this is believed to be a bug, which they are pursuing with the vendor.

In the meantime they have also identified a fix which they are working to deploy. We're currently awaiting further details, and will update this post once they become known.
Broadband Users Affected 0.05%
Started 7 Feb 09:00:00 by AA Staff

8 Feb
Details
8 Feb 15:30:07
We are expecting more router upgrades later in the month; these should address a couple of issues we have seen in the last few weeks, and should cause little or no disruption. Routers are normally done in the evening or overnight. LNSs are done as a rolling change, moving lines to a new LNS overnight for a series of nights. Do note that you can configure a preferred time of night on our control pages. Exact dates for the upgrades are not determined yet, but all should be done by the end of February.
Update
14 Feb 17:53:22
We will be doing some of the routers this evening, and starting the rolling LNS upgrades tonight.
Update
15 Feb 17:03:39
We will be doing the rest of the routers this evening, and continuing the LNS roll over which will run for several more days (we have quite a few LNSs now).
Update
27 Feb 10:02:24
We are expecting to do further upgrades at the start of March.
Update
1 Mar 16:35:50
We are starting a rolling LNS upgrade tonight, and doing some BGP router upgrades this evening.
Started 8 Feb
Expected close Tomorrow

7 Feb 14:49:55
[DNS, Email and Web Hosting] - SMTP Settings Change - Open
Details
7 Feb 14:52:42
For historical reasons, our SMTP servers allow sending authenticated email without TLS. This is insecure, and doesn't belong on the modern internet as it is possible for the username and password to be intercepted by a third party. We will no longer allow this as of the 4th of July. We are emailing customers who seem to be using insecure settings to warn them about the change. We have a support site page about what settings to change here: https://support.aa.net.uk/Enable_TLS_on_smtp.aa.net.uk Please contact support if you have any questions.
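For anyone updating scripts rather than mail clients, a minimal sketch of authenticated submission over STARTTLS using Python's smtplib follows; the port (587) and the credentials are assumptions, so check the support page above for the settings that apply to your account.

```python
# Minimal sketch: authenticated submission over STARTTLS rather than
# plain text. Port 587 and the credentials are assumptions; see the
# support page above for the settings that apply to your account.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.com"
msg["To"] = "someone@example.com"
msg["Subject"] = "TLS test"
msg.set_content("Sent over an encrypted connection.")

with smtplib.SMTP("smtp.aa.net.uk", 587) as smtp:
    smtp.starttls()                                # upgrade to TLS before AUTH
    smtp.login("you@example.com", "password")      # hypothetical credentials
    smtp.send_message(msg)
```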
Started 7 Feb 14:49:55
Expected close 4 Jul 09:00:00

20 Sep 2016 11:26:07
Details
20 Sep 2016 11:31:39
Our upstream provider has advised us that they will be carrying out firmware upgrades on core 3G infrastructure on 23rd September 2016 between 00:10 and 04:30 BST.
During this period data SIMs may briefly disconnect as sessions are migrated to other nodes in the upstream provider's network to facilitate the upgrades.
VoIP and SIMs Users Affected 25%
Started 20 Sep 2016 11:26:07 by Carrier
Update was expected 23 Sep 2016 11:30:00
Previously expected 23 Sep 2016 04:30:00 (Last Estimated Resolution Time from Carrier)

01 Sep 2016 17:18:26
Details
01 Sep 2016 17:26:51
Our SMS gateway has always supported HTTPS, but it still allows plain HTTP. As using unencrypted HTTP is probably a mistake these days, we are going to change our gateway to redirect all HTTP requests to HTTPS next week. We don't expect anything to break, as curl, etc., should "just work", but we're posting a PEW just in case anyone has a legacy script they think might need adjusting!
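One subtlety worth noting for legacy scripts: most HTTP clients (including curl and Python's requests) turn a redirected POST into a GET on a 301/302 response, dropping the request body, so the robust fix is to point scripts at the https:// URL directly rather than relying on the redirect. A minimal sketch, with a hypothetical endpoint and parameters:

```python
# Minimal sketch of the adjustment: use https:// directly rather than
# relying on the redirect. The URL and parameters are hypothetical
# placeholders, not the gateway's documented interface.
import requests

# Relying on the redirect is fragile: a 301/302 answer to a POST makes
# most clients (curl, requests) retry it as a GET without the body.
resp = requests.post(
    "https://sms.example.aa.net.uk/send",            # hypothetical endpoint
    data={"to": "+441234567890", "message": "hello"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)
```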
Update
09 Sep 2016 10:06:35
Unfortunately we've had to back out of making this change this week as it caused some unforeseen problems! We'll resolve those problems and make the change again soon.
Started 01 Sep 2016 17:18:26
Previously expected 07 Sep 2016 17:18:26

20 Jul 2016 09:58:43
[Maidenhead Colocation] - Web and email outage - Open
Details
04 Apr 2013 15:47:25

This is ongoing. We're investigating.

Update
04 Apr 2013 16:07:41

This should now be fixed. Please let support know if you see any problems, or have any questions.

Update
05 Apr 2013 09:15:04

This was resolved yesterday afternoon.

Update
20 Jul 2016 09:58:43
This may be related to a wider issue on the internet caused by a power outage at a major London data centre. Those routing problems are still ongoing.
Started 04 Apr 2013 15:46:11

13 Apr 2015
Details
09 Apr 2015 13:43:39

We have been advised by Three of works on their network which will involve rerouting traffic from one of their nodes via alternate paths in their core. Although connections should automatically reroute, there will be brief amounts of packet loss. As a result, some customers may experience dropped connections. Any device reconnecting will automatically be routed via a new path.

This only affects our data only SIMs.

Started 13 Apr 2015

08 Jan 2015 12:39:06
Details
08 Jan 2015 12:49:24
We're going to remove the legacy fb6000.cgi page that was originally used to display CQM graphs on the control pages. This does not affect people who use the control pages as normal, but we've noticed that fb6000.cgi URLs are still being accessed occasionally. This is probably because the old page is being used to embed graphs into people's intranet sites, for example, but accessing graph URLs via fb6000.cgi has been deprecated for a long time. The supported method for obtaining graphs via automated means is via the "info" command on our API: http://aa.net.uk/support-chaos.html This is likely to affect only a handful of customers but, if you believe you're affected and require help with accessing the API, please contact support. We will remove the old page after a week (on 2015-01-15).
Update
09 Jan 2015 08:52:28
We'll be coming up with some working examples of using our CHAOS API to get graphs; we'll post an update here today or Monday.
Update
12 Jan 2015 16:19:58
We have an example here: https://wiki.aa.net.uk/CHAOS
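As a flavour of what such a script looks like, a minimal sketch of an authenticated request is below; the endpoint, parameter names, and response format here are assumptions for illustration only, so follow the wiki example above for the real interface.

```python
# Hypothetical sketch only: the URL, parameters, and response layout
# are assumptions; https://wiki.aa.net.uk/CHAOS has the real examples.
import requests

resp = requests.get(
    "https://chaos.example.aa.net.uk/info",                  # hypothetical URL
    params={"username": "xxx@a.1", "password": "secret"},    # hypothetical auth
    timeout=30,
)
resp.raise_for_status()
print(resp.json())   # assumed JSON; the API may offer other formats
```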
Started 08 Jan 2015 12:39:06 by AA Staff
Previously expected 15 Jan 2015 17:00:00

03 Jun 2014 17:00:00
Details
03 Jun 2014 18:20:39
The router upgrades went well, and now there is a new factory release we'll be doing some rolling upgrades over the next few days. Should be minimal disruption.
Update
03 Jun 2014 18:47:21
First batch of updates done.
Started 03 Jun 2014 17:00:00
Previously expected 07 Jun 2014

14 Apr 2014
Details
13 Apr 2014 17:29:53
We handle SMS, both outgoing from customers, and incoming via various carriers, and we are now linking in once again to SMS with mobile voice SIM cards. The original code for this is getting a tad worn out, so we are working on a new system. It will have ingress gateways for the various ways SMS can arrive at us, core SMS routing, and then output gateways for the ways we can send on SMS.

The plan is to convert all SMS to/from standard GSM 03.40 TPDUs. This is a tad technical, I know, but it will mean that we have a common format internally. This will not be easy, as there are a lot of character set conversion issues, and multiple TPDUs where concatenation of texts is used.

The upshot for us is a more consistent and maintainable platform. The benefit for customers is more ways to submit and receive text messages, including using 17094009 to make an ETSI in-band modem text call from suitable equipment (we think Gigasets do this). It also means customers will be able to send/receive texts in a raw GSM 03.40 TPDU format, which will be of use to some customers, and it makes it easier for us to add other formats later. There will be some changes to the existing interfaces over time, but we want to keep these to a minimum, obviously.
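To give a flavour of why the character set handling is fiddly, below is a minimal sketch of unpacking the 7-bit septets in a GSM 03.40 user data field; a real decoder would also need the full GSM 03.38 alphabet tables (escape sequences, extension table) and UDH handling for concatenated texts.

```python
# Minimal sketch: unpack GSM 7-bit packed septets from a TPDU user
# data field. A real decoder also needs the full GSM 03.38 alphabet
# and UDH handling for concatenated texts.
def unpack_septets(octets: bytes, count: int) -> list:
    septets, carry, carry_bits = [], 0, 0
    for b in octets:
        septets.append(((b << carry_bits) | carry) & 0x7F)
        carry, carry_bits = b >> (7 - carry_bits), carry_bits + 1
        if carry_bits == 7:          # every 7 octets yield an 8th septet
            septets.append(carry)
            carry, carry_bits = 0, 0
    return septets[:count]

# For plain letters/digits the GSM default alphabet happens to match
# ASCII, so chr() works for this demo (it would NOT in general).
packed = bytes.fromhex("E8329BFD4697D9EC37")   # "hellohello", a standard example
print("".join(chr(s) for s in unpack_septets(packed, 10)))  # -> hellohello
```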
Update
21 Apr 2014 16:27:23

Work is going well on this, and we hope to switch Mobile Originated texting (i.e. texts from the SIP2SIM) over to the new system this week. If that goes to plan we can move some of the other ingress texting over to the new system one by one.

We'll be updating documentation at the same time.

The new system should be a lot more maintainable. We have a number of open tickets with the mobile carrier and other operators to try and improve the functionality of texting to/from us. These cover things like correct handling of multi-part texts, and correct character set coding.

The plan is ultimately to have full UTF-8 unicode support on all texts, but that could take a while. It seems telcos like to mess with things rather than giving us a clean GSM TPDU for texts. All good fun.

Update
22 Apr 2014 08:51:09
We have updated the web site documentation to the new system, but this is not fully in use yet. Hopefully we will have it all switched over this week. Right now we have removed some features from the documentation (such as delivery reports), but we plan to have these re-instated soon, once we have the new system handling them sensibly.
Update
22 Apr 2014 09:50:44
MO texts from SIP2SIM are now using the new system - please let support know of any issues.
Update
22 Apr 2014 12:32:07
Texts from Three are now working to ALL of our 01, 02, and 03 numbers. These are delivered by email, http, or direct to SIP2SIM depending on the configuration on our control pages.
Update
23 Apr 2014 09:23:20
We have switched over one of our incoming SMS gateways to the new system now. So most messages coming from outside will use this. Any issues, please let support know ASAP.
Update
25 Apr 2014 10:29:50
We are currently running all SMS via the new platform - we expect there to be more work still to be done, but it should be operating as per the current documentation now. Please let support know of any issues.
Update
26 Apr 2014 13:27:37
We have switched the DNS to point SMS to the new servers running the new system. Any issues, please let support know.
Started 14 Apr 2014
Previously expected 01 May 2014

11 Apr 2014 15:50:28
Details
11 Apr 2014 15:53:42
There is a problem with the C server and it needs to be restarted again after the maintenance yesterday evening. We are going to do this at 17:00 as we need it to be done as soon as possible. Sorry for the short notice.
Started 11 Apr 2014 15:50:28

07 Apr 2014 13:45:09
Details
07 Apr 2014 13:52:31
We will be carrying out some maintenance on our 'C' SIP server outside office hours. It will cause disruption to calls, but is likely only to last a couple of minutes and will only affect calls on the A and C servers. It will not affect calls on our "voiceless" SIP platform or SIP2SIM. We will do this on Thursday evening at around 22:30. Please contact support if you have any questions.
Update
10 Apr 2014 23:19:59
Completed earlier this evening.
Started 07 Apr 2014 13:45:09
Previously expected 10 Apr 2014 22:45:00

25 Sep 2013
Details
18 Sep 2013 16:32:41
We have received notification that Three's network team will be carrying out maintenance on one of the nodes that routes our data SIM traffic between 00:00 and 06:00 on Weds 25th September. Some customers may notice a momentary drop in connections during this time as any SIMs using that route will disconnect when the link is shut down. Any affected SIMs will automatically take an alternate route when they try and reconnect. Unfortunately, we have no control over the timing of this as it is dependent on the retry strategy of your devices. During the window, the affected node will be offline therefore SIM connectivity should be considered at risk throughout.
Started 25 Sep 2013

[DNS, Email and Web Hosting] - At Risk Period for Web Hosting - Open
Details
21 Feb 14:43:14
We are carrying out maintenance on our customer facing web servers during this Thursday's maintenance window. We expect no more than a couple of minutes of downtime but web services should be considered "at risk" during the work.
Previously expected 23 Feb 22:00:00

Details
06 Jul 2015 12:49:42
We have been advised by Three of works on their network (22:30 8th July to 04:50 9th July 2015) which will involve rerouting traffic from one of their nodes via alternate paths in their core. Although connections should automatically reroute, there will be brief amounts of packet loss. As a result, some partners may experience dropped connections. Any device reconnecting will automatically be routed via a new path. We apologise for any inconvenience this may cause and for the short notice of this advisory.

Details
12 Feb 2015 15:57:26
We have received the below PEW notification from one of our carriers that we take voice SIMs from: "We have been advised by one of our layer 2 suppliers of emergency works to upgrade firmware on their routers to ensure ongoing stability. This will cause short drops in routing during the following periods: 00:00 to 06:00 Fri 13th Feb, and 00:00 to 04:00 Mon 16th Feb. Although traffic should automatically reroute within our core as part of our MPLS infrastructure, some partners may experience disruption to SIM connectivity due to the short heartbeats used on SIM sessions."

28 Mar 16:00:00
Details
28 Mar 17:43:33

Outgoing external SMS were not working for a period from yesterday afternoon (we are not exactly sure of the time) until this afternoon. We have identified the cause of the problem and it has been rectified. This will have impacted our normal SMS line up/down notifications as well.

The status code returned by our API would have indicated that no parts of the message were sent. The messages are not queued and so will not be re-sent. They were not charged for.

Sorry for any inconvenience. We are looking into ways we can pick up issues like this sooner in future.

Obviously we appreciate the fault report from the customer that made us aware of this issue.

Started 27 Mar 15:00:00
Closed 28 Mar 16:00:00
Previously expected 28 Mar 16:00:00

27 Mar 09:30:00
Details
19 Feb 18:35:15
We have seen some cases with degraded performance on some TT lines, and we are investigating. Not a lot to go on yet, but be assured we are working on this and engaging the engineers within TT to address this.
Update
21 Feb 10:13:20

We have completed further tests and we are seeing congestion manifesting itself as slow throughput at peak times (evenings and weekends) on VDSL (FTTC) lines that connect to us through a certain TalkTalk LAC.

This has been reported to senior TalkTalk staff.

To explain further: VDSL circuits are routed from TalkTalk to us via two LACs. We are seeing slow throughput at peak times on one LAC and not the other.

Update
27 Feb 11:08:58
Very often with congestion it is easy to find the network port or system that is overloaded, but so far, sadly, we've not found the cause. A&A staff, customers, and TalkTalk network engineers have done a lot of checks and tests on various bits of the backhaul network, but we are finding it difficult to locate the cause of the slow throughput. We are all still working on this and will update again tomorrow.
Update
27 Feb 13:31:39
We've been in discussions with other TalkTalk wholesalers who have also reported the same problem to TalkTalk. There does seem to be more of a general problem within the TalkTalk network.
Update
27 Feb 13:32:12
We have had an update from TalkTalk saying that, based on multiple reports from ISPs, they are investigating further.
Update
27 Feb 23:21:21
Further tests this evening by A&A staff show that the slow throughput is not related to a specific LAC, but that it looks like something in TalkTalk is limiting single TCP sessions to 7-9M max during peak times. Running single iperf tests results in 7-9M, but running ten at the same time can fill a 70M circuit. We've passed these findings on to TalkTalk.
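For anyone wanting to reproduce this comparison: iperf3's -P flag opens parallel TCP streams, and -J emits JSON including the aggregate throughput. A minimal sketch follows; the server name is a placeholder.

```python
# Rough sketch of the single-stream vs multi-stream comparison,
# assuming iperf3 is installed; the server name is a placeholder.
import json
import subprocess

def iperf_mbps(server: str, streams: int) -> float:
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e6

server = "iperf.example.net"   # hypothetical test server
print(f"1 stream:   {iperf_mbps(server, 1):.1f} Mb/s")   # ~7-9M at peak
print(f"10 streams: {iperf_mbps(server, 10):.1f} Mb/s")  # can fill a 70M line
```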
Update
28 Feb 09:29:56
As expected, the same iperf throughput tests are working fine this morning, which suggests TT are shaping at peak times. We are pursuing this with senior TalkTalk staff.
Update
28 Feb 11:27:45
TalkTalk are investigating. They have stated that circuits should not be rate limited and that they are not intentionally rate limiting. They are still investigating the cause.
Update
28 Feb 13:14:52
Update from TalkTalk: Investigations are currently underway with our NOC team who are liaising with Juniper to determine the root cause of this incident.
Update
1 Mar 16:38:54
TalkTalk are able to reproduce the throughput problem and investigations are still on going.
Update
2 Mar 16:51:12
Some customers did see better throughput on Wednesday evening, but not everyone. We've done some further testing with TalkTalk today and they continue to work on this.
Update
2 Mar 22:42:27
We've been in touch with the TalkTalk Network team this evening and have been performing further tests (see https://aastatus.net/2363 ). Investigations are still ongoing, but the work this evening has given a slight clue.
Update
3 Mar 14:24:48
During tests yesterday evening we saw slow throughput when using the Telehouse interconnect and fast (normal) throughput over the Harbour Exchange interconnect. Therefore, this morning, we disabled our Telehouse North interconnect. We will carry on running tests over the weekend and we welcome customers to do the same; we are expecting throughput to be fast for everyone. We will then liaise with TalkTalk engineers regarding this on Monday.
Update
6 Mar 15:39:33

Tests over the weekend suggest that speeds are good when we only use our Harbour Exchange interconnect.

TalkTalk are moving the interconnect we have at Telehouse to a different port at their side so as to rule out a possible hardware fault.

Update
6 Mar 16:38:28
TalkTalk have moved our THN port and we will be re-testing this evening. This may cause some TalkTalk customers to experience slow (single thread) downloads this evening. See: https://aastatus.net/2364 for the planned work notice.
Update
6 Mar 21:39:55
The testing has been completed, and sadly we still see slow speeds when using the THN interconnect. We are now back to using the Harbour Exchange interconnect where we are seeing fast speeds as usual.
Update
8 Mar 12:30:25
Further testing is happening on Thursday evening (see https://aastatus.net/2366). This is to try to help narrow down where the problem is occurring.
Update
9 Mar 23:23:13
We've been testing this evening, this time with some more customers, so thank you to those who have been assisting. (We'd welcome more customers to be involved - you just need to run an iperf server on IPv4 or IPv6 and let one of our IPs through your firewall - contact Andrew if you're interested.) We'll be passing the results on to TalkTalk, and the investigation continues.
Update
10 Mar 15:13:43
Last night we saw some lines slow and some lines fast, so having extra lines to test against should help in figuring out why this is the case. Quite a few customers have set up iperf servers for us and we are now testing 20+ lines. (Still happy to add more.) Speed tests are being run three times an hour; we'll collate the results after the weekend and report the findings back to TalkTalk.
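The repeated testing is straightforward to script; a minimal sketch of the idea follows, with hypothetical volunteer hostnames, running a short iperf3 test against each line three times an hour and appending the results to a CSV for collation.

```python
# Minimal sketch of the periodic testing, assuming iperf3 is installed;
# the volunteer hostnames are hypothetical placeholders.
import csv, json, subprocess, time

LINES = ["volunteer1.example.net", "volunteer2.example.net"]  # hypothetical

while True:
    for host in LINES:
        try:
            out = subprocess.run(
                ["iperf3", "-c", host, "-t", "5", "-J"],
                capture_output=True, text=True, timeout=60, check=True,
            ).stdout
            mbps = json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e6
        except (subprocess.SubprocessError, KeyError, ValueError):
            mbps = None   # line busy or unreachable; record the gap
        with open("iperf_results.csv", "a", newline="") as f:
            csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M"), host, mbps])
    time.sleep(20 * 60)   # three runs an hour
```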
Update
11 Mar 20:10:21
Update
13 Mar 15:22:43

We now have samples of lines which are affected by the slow throughput and those that are not.

Since 9pm Sunday we have been using the Harbour Exchange interconnect into TalkTalk, and so all customers should be seeing fast speeds.

This is still being investigated by us and TalkTalk staff. We may do some more testing in the evenings this week and we are continuing to run iperf tests against the customers who have contacted us.
Update
14 Mar 15:59:18

TalkTalk are doing some work this evening and will be reporting back to us tomorrow. We are also going to be carrying out some tests ourselves this evening too.

Our tests will require us to move traffic over to the Telehouse interconnect, which may mean some customers will see slow (single thread) download speeds at times. This will be between 9pm and 11pm

Update
14 Mar 16:45:49
This is from the weekend: [graph of the weekend's iperf results]

Update
17 Mar 10:42:28
We've stopped the iperf testing for the time being. We will start it back up again once we or TalkTalk have made changes that require testing to see if things are better or not, but at the moment there is no need for the testing as all customers should be seeing fast speeds due to the Telehouse interconnect not being in use. Customers who would like quota top-ups, please do email in.
Update
17 Mar 18:10:41
To help with the investigations, we're also asking for customers with BT connected FTTC/VDSL lines to run iperf so we can test against them too - details on https://support.aa.net.uk/TTiperf Thank you!
Update
20 Mar 12:54:02
Thanks to those who have set up iperf for us to test against. We ran some tests over the weekend whilst swapping back to the Telehouse interconnect, and tested BT and TT circuits for comparison. Results are that around half the TT lines slowed down but the BT circuits were unaffected.

TalkTalk are arranging some further tests to be done with us which will happen Monday or Tuesday evening this week.

Update
22 Mar 09:37:30
We have scheduled testing of our Telehouse interlink with TalkTalk staff for this Thursday evening. This will not affect customers in any way.
Update
22 Mar 09:44:09
In addition to the interconnect testing on Thursday mentioned above, TalkTalk have also asked us to retest DSL circuits to see if they are still slow. We will perform these tests tonight, Wednesday evening.

TT have confirmed that they have made a configuration change on the switch at their end in Telehouse - this is the reason for the speed testing this evening.

Update
22 Mar 12:06:50
We'll be running iperf3 tests against our TT and BT volunteers this evening, every 15 minutes from 4pm through to midnight.
Update
22 Mar 17:40:20
We'll be changing over to the Telehouse interconnect between 8pm and 9pm this evening for testing.
Update
23 Mar 10:36:06

Here are the results from last night [graphs: TT circuits, then BT circuits].

Some of the results are rather up and down, but these lines are in use by customers so we would expect some fluctuation; it's clear that a number of lines are unaffected and a number are affected.

Here's the interesting part. Since this problem started we have rolled out some extra logging on to our LNSs; this has taken some time, as we only update one a day. However, we are now logging the IP address used at our side of L2TP tunnels from TalkTalk. We have eight live LNSs and each one has 16 IP addresses that are used. With this logging we've identified that circuits connecting over tunnels on 'odd' IPs are fast, whilst those on tunnels on 'even' IPs are slow. This points to a LAG issue within TalkTalk, which is what we have suspected from the start, and this data should hopefully help TalkTalk with their investigations.
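A sketch of the collation that exposes this kind of split follows; the sample figures are made up, but grouping each circuit's measured throughput by the parity of the last octet of its tunnel IP is the actual diagnostic idea.

```python
# Sketch of the collation described above; the sample rows are made up,
# but grouping by last-octet parity is the actual diagnostic idea.
from statistics import mean

# (tunnel IP at our side, measured Mb/s) - hypothetical sample rows
samples = [
    ("90.155.53.1", 68.2), ("90.155.53.2", 8.1),
    ("90.155.53.3", 71.5), ("90.155.53.4", 7.4),
]

by_parity = {"odd": [], "even": []}
for ip, mbps in samples:
    last_octet = int(ip.rsplit(".", 1)[1])
    by_parity["odd" if last_octet % 2 else "even"].append(mbps)

for parity, speeds in by_parity.items():
    print(f"{parity:4} tunnel IPs: mean {mean(speeds):.1f} Mb/s over {len(speeds)} lines")
# A consistent fast/slow split like this points at one member of a LAG.
```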

Update
23 Mar 16:27:28
As mentioned above, we have scheduled testing of our Telehouse interlink with TalkTalk staff for this evening. This will not affect customers in any way.
Update
23 Mar 22:28:53

We have been testing the Telehouse interconnect this evening with TalkTalk engineers. This involved an ~80 minute conference call and setting up a very simple test: a server on our side plugged in to the switch which is connected to our 10G interconnect, running iperf3 tests against a laptop on the TalkTalk side.

The test has highlighted a problem at the TalkTalk end with the connection between two of their switches. When their laptop was plugged in to the second switch we got about 300Mb/s, but when it was in the switch directly connected to our interconnect we got near full speed of around 900Mb/s.

This has hopefully given them a big clue and they will now involve the switch vendor for further investigations.

Update
23 Mar 23:02:34
TalkTalk have just called us back and have asked us to retest speeds on broadband circuits. We're moving traffic over to the Telehouse interconnect and will test....
Update
23 Mar 23:07:31
Initial reports show that speeds are back to normal! Hooray! We've asked TalkTalk for more details and if this is a temporary or permanent fix.
Update
24 Mar 09:22:13

Results from last night when we changed over to test the Telehouse interlink [graph]: unlike previous times, when we changed over to use the Telehouse interconnect at 11PM, speeds did not drop.

We will perform hourly iperf tests over the weekend to be sure that this has been fixed.

We're still awaiting details from TalkTalk as to what the fix was and if it is a temporary or permanent fix.

Update
24 Mar 16:40:24
We are running on the Telehouse interconnect and are running hourly iperf3 tests against a number of our customers over the weekend. This will tell us if the speed issues are fixed.
Update
27 Mar 09:37:12

Speed tests against customers over the weekend do not show the peak time slow downs; this confirms that what TalkTalk did on Thursday night has fixed the problem. We are still awaiting the report from TalkTalk regarding this incident.

The graph above shows iperf3 speed test results taken once an hour over the weekend against nearly 30 customers. Although some are a bit spiky, we are no longer seeing the drastic reduction in speeds at peak time. The spikiness is due to the lines being used as normal by the customers and so is expected.

Update
28 Mar 10:52:25
We're expecting the report from TalkTalk at the end of this week or early next week (w/b 2017-04-03).
Resolution This has been fixed; we're awaiting the full report from TalkTalk.
Started 18 Feb
Closed 27 Mar 09:30:00
Cause TT

19 Mar 11:00:00
Details
16 Mar 11:02:52
We will be performing software upgrades to two of our core network switches on Sunday morning in Telehouse. This will cause a few minutes of disruption for each upgrade. This work will happen between 10AM and noon on Sunday morning. We will have staff at the datacentre overseeing this work.
Update
19 Mar 09:54:14
This work will start shortly and we do expect it to cause a few re-connects for customers whilst the upgrades are happening.
Update
19 Mar 10:44:01
The first switch has been updated, the second switch will be updated in a few minutes.
Update
19 Mar 10:52:38
The second switch has now been upgraded. The upgrade work is complete, we're performing final checks before closing this incident.
Resolution This work has been completed. We were expecting this to cause a brief outage for customers, but in practice the outage impacted customers more than we would have liked; customers on BT backhaul were more impacted than those on TalkTalk backhaul, for example. This type of upgrade is rare, but we will be looking into how things can be improved. We do apologise to customers who were impacted by this.
Started 19 Mar 10:00:00
Closed 19 Mar 11:00:00

14 Mar 21:10:00
Details
14 Mar 21:05:28
Looks like we just had some sort of blip affecting broadband customers. We're investigating.
Resolution This was an LNS crash, and so affected customers on the "i" LNS. The cause is being investigated, but preliminary investigations show that it's probably a problem that is fixed in software scheduled to be loaded on to this LNS in a couple of days' time, as part of the rolling software update that we're performing at the moment.
Broadband Users Affected 12%
Started 14 Mar 21:00:57
Closed 14 Mar 21:10:00

10 Mar 15:27:30
Details
10 Mar 09:06:46
We've had a few reports of calls having one way audio - we're currently investigating.
Update
10 Mar 09:23:00
This looks to be carrier specific, so we've disabled that carrier for the moment - calls are sounding better. We'll carry on investigating.
Update
10 Mar 09:39:41
The issue seems to be that outbound calls via one of our carriers are failing to connect. We have disabled this carrier for the time being. Our carrier reports that they are currently experiencing DNS issues.
Update
10 Mar 15:30:50
The issue with our supplier has now been resolved and they have been reintroduced into production.
Started 10 Mar 09:01:00
Closed 10 Mar 15:27:30

7 Mar 10:27:14
Details
7 Mar 10:27:14

Customers expecting Direct Debit collection on Wednesday 8th March will actually have the collection made on Thursday 9th March.

We do apologise for any confusion this may cause.

As this is within the permitted 3 working day window in which we can make a collection we will not be sending separate individual notices of this change.

Started 8 Mar

6 Mar 21:37:45
Details
6 Mar 16:41:32
As part of the slow throughput problem described in https://aastatus.net/2358 we will be performing further tests this evening. This will involve moving TalkTalk traffic to the interconnect which we believe is slow. Customers may see poor speeds this evening during the times that we carry out tests. The tests are expected to last less than 30 minutes between 8 and 10 pm.
Resolution This work has been completed.
Started 6 Mar 20:00:00
Closed 6 Mar 21:37:45

7 Mar 12:02:32
Details
2 Mar 16:31:46
We will be doing some spring cleaning on Monday 6th March at our Maidenhead datacentre. This involves removing old equipment. We don't expect any interruptions but it is an at-risk period.
Resolution This work has been completed.
Started 6 Mar 18:30:00
Closed 7 Mar 12:02:32

2 Mar 22:10:44
Details
2 Mar 21:48:39
Relating to https://aastatus.net/2358, we are currently in an emergency at-risk period as we perform some tests alongside TalkTalk staff. We don't expect any problems, but this work involves re-routing TalkTalk traffic within our network. This work is happening now; sorry for the lack of notice.
Update
2 Mar 21:53:05
We have successfully and cleanly moved all TalkTalk traffic off our THN interconnect and on to our HEX Interconnect. (Usually we use both all the time, but for this testing we are forcing traffic through the HEX side)
Update
2 Mar 21:55:52
We're bringing back routing across both links now...
Update
2 Mar 22:03:40
We are now moving traffic to our THN interconnect.
Resolution We're now back to using both the TalkTalk links. Tests completed.
Started 2 Mar 21:46:17
Closed 2 Mar 22:10:44

28 Feb 11:44:16
Details
28 Feb 11:44:16

We cordially invite customers, friends, peers, associates, and small furry creatures from Alpha Centauri to come along, at any time between 2pm and 10pm, and to stay for as long as you like, drink, eat (there will be a BBQ) and be merry.

We wanted to just keep an eye on the number of attendees, hence the EventBrite. The tickets are free though, of course.

Sun 2 April 2017 14:00 – 22:00 BST

https://www.eventbrite.co.uk/e/aaispissup-aka-nerdstock-2017-tickets-32350365815

Started 28 Feb 11:00:00

25 Feb 18:58:41
Details
25 Feb 15:53:47
Our accounts systems, and hence ordering, are off line briefly. A very minor change has proven to take rather longer than expected, and at this stage we have no choice but to simply wait for the process to complete. Sorry for any inconvenience.
Update
25 Feb 16:46:16
This is progressing, but could take until 6pm at the current rate. Apologies for any inconvenience.
Update
25 Feb 18:00:08
We are making progress, but it will be a while longer. Sorry for the inconvenience.
Resolution Finally completed; sorry for the hassle.
Started 25 Feb 15:30:00
Closed 25 Feb 18:58:41
Previously expected 25 Feb 17:00:00

13 Jun 2015 10:57:07
Details
12 Mar 2015 09:48:01
Our wiki at http://wiki.aa.net.uk/ will be down for a while today due to an internal PEW. Sorry for any inconvenience.
Closed 13 Jun 2015 10:57:07

13 Jun 2015 10:57:07
Details
12 Jun 2015 11:00:23
Our office connectivity blipped. We have internet again, but our phones are down. We're investigating. This is not customer affecting.
Update
12 Jun 2015 11:06:49
Phones are working again.
Closed 13 Jun 2015 10:57:07
Previously expected 12 Jun 2015 14:57:07

16 Feb 15:00:00
Details
16 Feb 16:00:49
We spotted some odd latency affecting two of our LNSs ("A" and "B" gormless). This was also visible, as you would expect, on the graphs shown for people's lines.
Resolution We believe we have addressed the issue now, sorry for any inconvenience.
Started 15 Feb 02:00:00
Closed 16 Feb 15:00:00
Previously expected 16 Feb 15:00:00

13 Feb 10:02:12
[Broadband] - LNS blip - Closed
Details
13 Feb 10:00:36
We just had an LNS blip - this would have caused some customers to drop PPP and reconnect.
Resolution There have been a few LNS blips recently. However, we do know the cause and have a software update to roll out which will fix the problem.
Started 13 Feb 09:56:00
Closed 13 Feb 10:02:12

9 Feb 13:00:00
Details
9 Feb 12:54:39
There appears to be a power outage in Maidenhead. This is affecting all our VoIP and email services. More information to come.
Update
9 Feb 12:59:03
Power is back. VoIP and email services have been restored. Postmortem to come.
Resolution Power was restored within a few minutes - we do apologise for this unexpected power outage - the cause is suspected to be a faulty network switch which tripped the circuit breaker when it was lightly touched by one of our staff in the datacentre! The switch has been powered off and will be removed from the datacentre out of hours.
Started 9 Feb 12:52:45
Closed 9 Feb 13:00:00

4 Feb 09:32:03
[Broadband] - LNS blip - Closed
Details
4 Feb 09:14:11
We had an LNS reset and lines will have re-connected for some customers. We're investigating the cause.
Resolution We have found the cause, and expect a permanent fix to be deployed on next round of LNS upgrades.
Broadband Users Affected 12%
Started 4 Feb 09:12:00
Closed 4 Feb 09:32:03

2 Feb 21:19:15
Details
2 Feb 21:19:15
http://www.euronews.com/2017/01/27/adrian-kennard-challenging-surveillance

31 Jan 16:29:00
Details
31 Jan 16:24:03
Customers on one of our LNSs just lost their connection and would have logged back in again shortly after. We're investigating the cause.
Update
31 Jan 16:41:32
Customers are back online. The CQM graphs for the day would have been lost for these lines. We do apologise for the inconvenience this caused.
Broadband Users Affected 12%
Started 31 Jan 16:16:00
Closed 31 Jan 16:29:00

27 Jan 16:55:00
Details
26 Jan 13:56:39
Our data-only SIMs will be 4G (LTE) enabled over the coming days. They will still support 3G as they do now, but where available 4G will also be supported.
Resolution This has been done.
Started 26 Jan 13:50:00
Closed 27 Jan 16:55:00

24 Jan 18:15:00
Details
24 Jan 16:11:45
Some TalkTalk-connected customers have had high packet loss on their lines from around 3pm today. These lines are in the Chippenham/Bristol area. If affected, you'll be experiencing slow speeds.
Update
24 Jan 16:19:23

Affected lines are looking like this [graph]: it shows the fault started just after 9am, but from 3pm there is severe packet loss.

Update
24 Jan 18:32:37
TalkTalk say "NOC & Network engineering are currently investigating congestion and packet loss across the core network." More details to follow.
Update
24 Jan 18:45:58
Problem looks fixed as of 18:15
Update
25 Jan 08:48:01
(This also affected some other circuits in other parts of the country.)
Resolution From TalkTalk: "Root cause has not currently been identified. The (TalkTalk) NOC engaged Network Support, who investigated and added a new link in order to alleviate congestion. The B2B Enterprise team are currently retesting with the affected customers, and initial feedback indicates that this has resolved the issue."
Broadband Users Affected 1%
Started 24 Jan 15:00:00
Closed 24 Jan 18:15:00

23 Jan 21:50:24
Details
23 Jan 21:17:18
Since 20:23 we're seeing ~20% packet loss on TalkTalk connected VDSL circuits, these customers will be experiencing very slow speeds. These are in the SALTERTON/DORCHESTER/WESTBOURNE/CRADDOCK area. We have contacted TalkTalk regarding this.
Update
23 Jan 21:50:48
This looks to have been fixed.
Resolution This was due to a card failure at Yeovil
Started 23 Jan 20:23:00
Closed 23 Jan 21:50:24
Cause TT

23 Jan 17:03:31
[DNS, Email and Web Hosting] - SSL Certificates Updating - Info
Details
23 Jan 17:03:31
We're updating SSL certificates for our email servers today. The old serial number is 124247. The new serial number is 12AD7B. Users who don't have the CAcert root certificate installed may see errors. This does not affect webmail or outgoing SMTP. Details on http://aa.net.uk/cacert.html

24 Jan
Details
23 Jan 08:21:07
Sorry to say that the new LNSs (H and I) were not archiving graphs and so the CQM graphs for customers on these LNSs have not been recorded.
Resolution Fixed
Started 16 Jan
Closed 24 Jan
Previously expected 24 Jan

18 Jan 20:30:00
Details
18 Jan 20:36:56
We're looking into why some broadband lines and mobile SIMs dropped and reconnected at around 20:30 this evening...
Resolution Lines are back online, most reconnected within a few minutes. This blip affected about 1/8th of our customers, and was caused by one of our LNS restarting unexpectedly. We do apologise for the inconvenience this caused. We'll be investigating the cause of this.
Started 18 Jan 20:35:58
Closed 18 Jan 20:30:00
Cause LNS restart/crash

17 Jan 09:48:47
Details
17 Jan 08:35:28
Once again we are seeing an issue where TT lines are failing to connect. This is not impacting lines that are currently connected, unless they drop and reconnect for some reason. It looks like only half of TT's LACs are impacted, and so lines are eventually reconnecting after several tries. It has been reported to TalkTalk and we will update this post as soon as we get an update.
Update
17 Jan 09:50:18
All affected lines appear to have reconnected.
Resolution We are still investigating the root cause
Broadband Users Affected 1%
Started 17 Jan 01:00:00
Closed 17 Jan 09:48:47
Previously expected 17 Jan 12:31:59

13 Jan 05:16:36
Details
12 Dec 2016 16:21:48
Here are our opening times over Christmas and the New Year.
Fri 23rd Open as Usual
Sat 24th Informal (Some Support staff monitoring IRC and Email)
Sun 25th Closed
Mon 26th Closed
Tue 27th Closed
Wed 28th Open
Thu 29th Open
Fri 30th Open
Sat 31st Informal (Some Support staff monitoring IRC and Email)
Sun 1st Closed
Mon 2nd Closed
Tue 3rd Open...
We wish all our customers a very merry Christmas and a happy new year!
Update
23 Dec 2016 15:25:59
Due to the low volume of calls, our offices are now closed. You can still email support@aa.net.uk or text 01344 400 999 to raise a support ticket. If you believe there is a major issue, that affects multiple customers and that is not shown here, please start your text with MSO which will alert staff. Merry Christmas!
Started 12 Dec 2016 16:00:00

10 Jan 14:31:24
Details
10 Jan 08:49:16
We are seeing an issue where TT lines are failing to connect. This is not impacting lines that are currently connected, unless they drop and reconnect for some reason. It looks like only half of TT's LACs are impacted, and so lines are eventually reconnecting after several tries.
Update
10 Jan 09:43:21
TalkTalk engineers are working on the issue now.
Update
10 Jan 10:59:21
TalkTalk have raised an incident and are still working on resolving this.
Update
10 Jan 11:39:56
A few lines have logged back in, no word back from TT yet.
Update
10 Jan 11:49:57
It looks like all lines are back, we'll update this post again when we get further news from TalkTalk.
Update
10 Jan 15:32:19
Update from TalkTalk regarding this outage: "Investigations by our NOC and network support team identified that a routing card had failed on a router. Routing functionality was moved onto an alternative card by our network support team to restore service. This was completed at 11:46hrs and monitoring has not identified any further issues"
Started 10 Jan 03:20:00
Closed 10 Jan 14:31:24

7 Jan 11:40:55
Details
7 Jan 10:30:49
There seems to be a major issue in the Maidenhead data centre at present; we are investigating.
Update
7 Jan 11:15:07
We have an engineer who should be on site in a few minutes.
Update
7 Jan 11:19:23
Voice services (also in Maidenhead) seem unaffected, but there may be some disruption shortly whilst we work on this problem.
Resolution Engineer has found the problem and reset it. All looking good now.
Started 7 Jan 10:17:00
Closed 7 Jan 11:40:55

16 Jan 15:35:10
Details
4 Jan 13:44:20
This post is mainly for our wholesalers. We are adding two additional LNSs this week, so wholesalers and other customers that we relay L2TP connections to may need to update their access lists/firewalls to allow connections from the new IP addresses. The two new LNSs are:

90.155.53.58 (2001:8b0:0:53::58)
90.155.53.59 (2001:8b0:0:53::59)
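For convenience, a minimal sketch that prints example allow rules for these addresses follows, assuming an iptables/ip6tables-based filter and standard L2TP over UDP port 1701; adapt it to whatever ACL syntax your equipment uses.

```python
# Minimal sketch: print allow rules for the new LNS addresses, assuming
# an iptables/ip6tables filter and L2TP over UDP port 1701. Adapt the
# syntax (and chain names) to your own firewall.
NEW_V4 = ["90.155.53.58", "90.155.53.59"]
NEW_V6 = ["2001:8b0:0:53::58", "2001:8b0:0:53::59"]

for ip in NEW_V4:
    print(f"iptables -A INPUT -p udp -s {ip} --dport 1701 -j ACCEPT")
for ip in NEW_V6:
    print(f"ip6tables -A INPUT -p udp -s {ip} --dport 1701 -j ACCEPT")
```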

Resolution The new LNSs are now in use.
Started 6 Jan 13:00:00
Closed 16 Jan 15:35:10

28 Jan 10:43:34
Details
3 Jan 09:36:03
We are doing some general router upgrades. As usual these should cause little or no disruption, and we will be doing LNS upgrades as a rolling upgrade, one per night. We are also going to be bringing two more LNSs on-line to increase our capacity further.
Update
3 Jan 17:39:54
There will only be a few routers this evening; tomorrow we will look to bring in the new LNSs.
Update
4 Jan 02:34:01
Further updates this morning mean we have now completed around half of our core router upgrades.
Update
4 Jan 13:50:30
It looks like we will start the rolling LNS updates on Friday night instead. Testing today has gone well, though.
Update
4 Jan 18:42:46
Core routers all upgraded, only LNSs now.
Update
6 Jan 17:01:27
Rolling LNS updates will start tonight, once this is complete we will bring the two new LNSs on line.
Update
12 Jan 18:03:46
LNS roll over is complete, we have some further updates and will be bringing new LNSs on-line over the next few days.
Update
14 Jan 15:10:22
Two additional LNSs are on-line now. We expect to do another LNS roll over soon to spread the load evenly.
Update
16 Jan 15:36:05
We will be running a rolling LNS switch over starting tonight.
Update
17 Jan 08:56:51
The LNS switch for a few customers from the "H" LNS to the "I" LNS did not work properly last night. This has been fixed, but it means there may have been more than one PPP restart overnight, and one just before 9am. Looks good now. Sorry for any inconvenience.
Resolution Upgrades completed
Started 3 Jan 18:00:00
Closed 28 Jan 10:43:34
Previously expected 24 Jan

20 Dec 2016 05:21:26
Details
19 Dec 2016 19:11:18
Some TT lines are down; TalkTalk are aware and working on it.
Update
19 Dec 2016 21:08:15
From TT at 20:47... Following initial investigations conducted by the NOC it has been identified that there are a number of Virgin Media Backhaul circuits down, this has been reported to Virgin Media who have an engineer on site at Hammersmith and another engineer en-route to Ealing. The cause of the loss of service is yet to be determined by Virgin Media. We will provide a further update as soon as more information is available. Summary Network monitoring has identified that a number of Phone, Broadband, TV, B2B, FTTC, AOL and TUK customers in the West London area are experiencing a loss of service. Impacted customers will be unable to use Phone, TV and/or their data services for the duration of this incident. The full list of impacted exchanges are still being collated. However, impacted exchanges identified so far are Hammersmith, Fulham, Nine Elms, Bayswater, Earls Court, Pimlico, Chelsea, South Kensington, Kensington Gardens, Shepherds Bush, Battersea, Belgravia and Sloane.
Resolution Update from TalkTalk: "The NOC have completed their checks and have advised that all of the affected customers have now had their service restored. They have liaised with Virgin Media, who have confirmed that they have completed all of their restoration work. This incident will now be resolved awaiting confirmation of the full root cause, which will be communicated once available."
Started 19 Dec 2016 17:33:00
Closed 20 Dec 2016 05:21:26