
Yesterday 16:11:36
Details
Posted: Yesterday 15:29:48
Incoming VOIP calls were intermittently not working to registered SIP phones. The problem was caused by registration data not syncing correctly between RADIUS servers and has been fixed. It would have resulted in some calls going to voicemail even when the callee's phone was successfully registered. Sorry for any inconvenience.
Update
Yesterday 16:11:36
Coincidentally, one of our carriers has also had call issues during and after our RADIUS fix, which has caused a bit of confusion!
Started Yesterday 13:00:00

31 Aug 01:00:00
Details
Posted: 31 Aug 08:36:41
TalkTalk have lots of small planned work projects happening at the moment. These generally happen from midnight and affect a small number of exchanges at a time. The work does cause service to stop for 30 minutes or longer. TalkTalk publish this information on their status page: https://managed.mytalktalkbusiness.co.uk/network-status/
We are looking at ways of adding these planned works to the Control Pages so as to make it clearer for customers whether they are going to be affected.
Started 31 Aug 01:00:00
Expected close 31 Oct 07:00:00

22 Aug 13:02:21
[DNS, Email and Web Hosting] IMAP indexing problem - Open
Details
Posted: 22 Aug 13:02:21
We are working on solving a problem that we're currently seeing with IMAP indexing on our mail servers. The symptoms customers are likely to see are small oddities such as emails appearing not to move between folders, or appearing twice. This problem is only index related and so doesn't actually affect the emails themselves. This problem is not causing email to be lost.
Update
22 Aug 16:47:26
We are still investigating a proper fix for this problem, but in the meantime we are making changes that should work around it. There is a small risk that, if you are using Sieve filtering, it may stop working. If that is the case, please contact support for assistance.

29 Jun 22:10:54
Details
Posted: 26 Apr 10:16:02
We have identified packet loss across our lines at MOSS SIDE that occurs between 8pm and 10pm. We have raised this with BT, who suggest that they hope to have this resolved by May 30th. We will update you on completion of this work.
Broadband Users Affected 0.15%
Started 26 Apr 10:13:41 by BT
Update was expected 30 May 12:00:00
Previously expected 30 May (Last Estimated Resolution Time from BT)

29 Jun 20:00:00
Details
Posted: 27 Jun 16:50:52
Our voice SIM carrier is carrying out emergency maintenance on their GGSNs between 20:00 and 00:00 on the 29th of June. This is expected to cause at least 15 minutes of downtime for voice SIMs.
Update
1 Jul 10:00:59
Our carrier has planned additional work which may affect users on the 5th, 6th, 11th, and 12th of July in the early hours of the morning.
Started 29 Jun 20:00:00 by Carrier
Previously expected 30 Jun 00:05:00

28 Jun 02:00:00
Details
Posted: 15 Jun 15:38:56
We have been advised of essential scheduled partner maintenance to upgrade core infrastructure. The window is as follows:
Service Impact Start Time: 28/6/2017 02:00
Service Impact End Time: 28/6/2017 04:30
Impact Time Expected: 30 minutes
Throughout the duration of this window, customers may see disruption of up to 30 minutes to their 3G/4G data services in the following areas: Ilford, Barking, Dagenham, Woolwich, Thamesmead, Bexleyheath, East Ham, West Ham, Poplar, Stepney, Bow, Greenwich, Deptford, Lewisham and surrounding areas. If you have any questions regarding this maintenance, please don't hesitate to contact our support team on 03333 400 999.
Started 28 Jun 02:00:00 by Carrier
Previously expected 28 Jun 04:30:00 (Last Estimated Resolution Time from Carrier)

12 Jun 10:01:38
[DNS, Email and Web Hosting] Issues with outbound email - Open
Details
Posted: 1 Jun 10:27:24
One of the key IP reputation services appears to have blacklisted both of our outgoing servers, which is causing issues for some email customers. We have noticed this occurring for emails destined for icloud.com or mac.com addresses; however, some other destinations may also be affected. We have contacted the service in question requesting that we be unblocked. We will update this status page as more information becomes available.
Update
1 Jun 11:01:17
We have adjusted our outgoing servers to use different IPs to circumvent the block. We would now suggest that mail customers attempt to resend any failed messages.
Update
12 Jun 09:41:39
We are seeing this again, unfortunately. We are investigating the cause, but we may have to wait for the blocks to expire.
Update
12 Jun 10:01:38
We've applied a workaround for the time being and sending email should be fine. We'll investigate why we were blocked in the first place and why our normal methods for blocking junk email did not work this time.
Started 1 Jun 06:55:00 by AA Staff

25 May 02:00:00
Details
Posted: 22 May 16:29:22
Due to upstream upgrade work on core infrastructure, there may be up to 2 hours 30 minutes disruption to data SIM services on the morning of Thursday the 25th of May between the hours of 2:00 and 4:30.
Started 25 May 02:00:00 by Carrier
Previously expected 25 May 04:30:00

6 Apr 01:00:00
Details
Posted: 30 Mar 11:19:03
Due to maintenance being performed by a carrier, disruption of up to 15 minutes to 3G/4G data services is expected some time between 01:00 and 03:00 on the 6th of April.
Started 6 Apr 01:00:00
Previously expected 6 Apr 03:00:00

9 Mar 20:00:00
Details
Posted: 8 Mar 12:29:14

We continue to work with TalkTalk to get to the bottom of the slow throughput issue as described on https://aastatus.net/2358

We will be performing some routing changes and tests this afternoon and this evening. We are not expecting this to cause any drops for customers, but this evening there will be times when throughput for 'single thread' downloads will be slow. Sorry for the short notice; please bear with us, this is proving to be a tricky fault to track down.

Update
8 Mar 22:39:39
Sorry, due to TalkTalk needing extra time to prepare for their changes this work has been moved to Thursday 9th evening.
Started 9 Mar 20:00:00
Update was expected 9 Mar 23:00:00

7 Feb 14:49:55
[DNS, Email and Web Hosting] SMTP Settings Change - Open
Details
Posted: 7 Feb 14:52:42
For historical reasons, our SMTP servers allow sending authenticated email without TLS. This is insecure, and doesn't belong on the modern internet as it is possible for the username and password to be intercepted by a third party. We will no longer allow this as of the 4th of July. We are emailing customers who seem to be using insecure settings to warn them about the change. We have a support site page about what settings to change here: https://support.aa.net.uk/Enable_TLS_on_smtp.aa.net.uk Please contact support if you have any questions.
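For illustration, here is a minimal, unofficial sketch of what an authenticated SMTP submission over TLS looks like in Python. The host name is taken from the support URL above; the port (587) and credentials are assumptions/placeholders, so check the support page for the actual settings.
```
# Unofficial sketch: authenticated SMTP submission that upgrades to TLS with
# STARTTLS before logging in. Port 587 and the credentials are placeholders;
# see the support page linked above for the real settings.
import smtplib
import ssl

context = ssl.create_default_context()
with smtplib.SMTP("smtp.aa.net.uk", 587) as smtp:
    smtp.starttls(context=context)          # refuse to continue without TLS
    smtp.login("user@example.com", "password")   # credentials now travel over TLS only
    smtp.sendmail("user@example.com", ["dest@example.com"],
                  "Subject: test\r\n\r\nSent over a TLS-protected session.\r\n")
```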
Started 7 Feb 14:49:55
Previously expected 4 Jul 09:00:00

20 Sep 2016 11:26:07
Details
Posted: 20 Sep 2016 11:31:39
Our upstream provider have advised us that they will be carrying out firmware upgrades on core 3G infrastructure on 23rd September 2016 between 00:10 and 04:30 BST.
During this period data SIMs may briefly disconnect as sessions are migrated to other nodes in the upstream provider's network to facilitate the upgrades.
VoIP and SIMs Users Affected 25%
Started 20 Sep 2016 11:26:07 by Carrier
Update was expected 23 Sep 2016 11:30:00
Previously expected 23 Sep 2016 04:30:00 (Last Estimated Resolution Time from Carrier)

01 Sep 2016 17:18:26
Details
Posted: 01 Sep 2016 17:26:51
Our SMS gateway has always supported HTTPS, but it still allows plain HTTP as well. As using HTTP only is probably a mistake these days, we are going to change our gateway to redirect all HTTP requests to HTTPS next week. We don't expect anything to break, as curl etc. should "just work", but we're posting a PEW just in case anyone has a legacy script they think might need adjusting!
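As an illustration only, a legacy script that submits over plain HTTP can simply switch its URL scheme to https:// rather than relying on the redirect (some HTTP clients re-issue a redirected POST as a GET). The endpoint path and parameter names below are hypothetical placeholders, not the gateway's real interface.
```
# Hypothetical sketch of adjusting a legacy submission script; the endpoint
# and parameter names are placeholders, not the real gateway interface.
import requests

resp = requests.post(
    "https://sms.example.invalid/send",   # placeholder: use your real gateway URL, with https://
    data={"destination": "+447700900000", "message": "test"},
    timeout=10,
)
print(resp.status_code, resp.text)
```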
Update
09 Sep 2016 10:06:35
Unfortunately we've had to back out of making this change this week as it caused some unforeseen problems! We'll resolve those problems and make the change again soon.
Started 01 Sep 2016 17:18:26
Previously expected 07 Sep 2016 17:18:26

20 Jul 2016 09:58:43
[Maidenhead Colocation] Web and email outage - Open
Details
Posted: 04 Apr 2013 15:47:25

This is ongoing. We're investigating.

Update
04 Apr 2013 16:07:41

This should now be fixed. Please let support know if you see any problems, or have any questions.

Update
05 Apr 2013 09:15:04

This was resolved yesterday afternoon.

Update
20 Jul 2016 09:58:43
This may be related to a wider issue on the internet caused by a power outage at a major London data centre. Those routing problems are still ongoing.
Started 04 Apr 2013 15:46:11

13 Apr 2015
Details
Posted: 09 Apr 2015 13:43:39

We have been advised by Three of works on their network which will involve rerouting traffic from one of their nodes via alternate paths in their core. Although connections should automatically reroute, there will be brief amounts of packet loss. As a result, some customers may experience dropped connections. Any device reconnecting will automatically be routed via a new path.

This only affects our data only SIMs.

Started 13 Apr 2015

08 Jan 2015 12:39:06
Details
Posted: 08 Jan 2015 12:49:24
We're going to remove the legacy fb6000.cgi page that was originally used to display CQM graphs on the control pages. This does not affect people who use the control pages as normal, but we've noticed that fb6000.cgi URLs are still being accessed occasionally. This is probably because the old page is being used to embed graphs into people's intranet sites, for example, but accessing graph URLs via fb6000.cgi has been deprecated for a long time. The supported method for obtaining graphs via automated means is the "info" command on our API (http://aa.net.uk/support-chaos.html). This is likely to affect only a handful of customers but, if you believe you're affected and require help with accessing the API, please contact support. We will remove the old page after a week (on 2015-01-15).
Update
09 Jan 2015 08:52:28
We'll be coming up with some working examples of using our CHAOS API to get graphs; we'll post an update here today or on Monday.
Update
12 Jan 2015 16:19:58
We have an example here: https://wiki.aa.net.uk/CHAOS
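As a rough, unofficial sketch of the idea only (the wiki page above has the real worked example): fetching line information, including graph details, amounts to an authenticated HTTPS request returning structured data. The endpoint URL and parameter names below are placeholders, not the real API; consult the CHAOS documentation linked above for the actual values.
```
# Unofficial sketch only: the URL and parameter names are placeholders.
# See https://wiki.aa.net.uk/CHAOS for the real example and
# http://aa.net.uk/support-chaos.html for the API documentation.
import requests

API_URL = "https://example.invalid/chaos/info"                       # placeholder endpoint
CREDENTIALS = {"username": "user@example.com", "password": "secret"} # placeholder parameter names

resp = requests.get(API_URL, params={**CREDENTIALS, "command": "info"}, timeout=30)
resp.raise_for_status()
print(resp.json())   # inspect the returned structure for per-line details such as graph URLs
```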
Started 08 Jan 2015 12:39:06 by AA Staff
Previously expected 15 Jan 2015 17:00:00

03 Jun 2014 17:00:00
Details
Posted: 03 Jun 2014 18:20:39
The router upgrades went well, and now that there is a new factory release we'll be doing some rolling upgrades over the next few days. There should be minimal disruption.
Update
03 Jun 2014 18:47:21
First batch of updates done.
Started 03 Jun 2014 17:00:00
Previously expected 07 Jun 2014

14 Apr 2014
Details
Posted: 13 Apr 2014 17:29:53
We handle SMS, both outgoing from customers, and incoming via various carriers, and we are now linking in once again to SMS with mobile voice SIM cards. The original code for this is getting a tad worn out, so we are working on a new system. It will have ingress gateways for the various ways SMS can arrive at us, core SMS routing, and then output gateways for the ways we can send on SMS. The plan is to convert all SMS to/from standard GSM 03.40 TPDUs. This is a tad technical I know, but it will mean that we have a common format internally. This will not be easy as there are a lot of character set conversion issues, and multiple TPDUs where concatenation of texts is used. The upshot for us is a more consistent and maintainable platform. The benefit for customers is more ways to submit and receive text messages, including using 17094009 to make an ETSI in-band modem text call from suitable equipment (we think gigasets do this). It also means customers will be able to send/receive texts in a raw GSM 03.40 TPDU format, which will be of use to some customers. It also makes it easier for us to add other formats later. There will be some changes to the existing interfaces over time, but we want to keep these to a minimum, obviously.
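To give a flavour of why the character-set handling is fiddly, here is a minimal sketch of the standard GSM 7-bit septet packing used by the default alphabet inside these TPDUs. This illustrates the GSM 03.38/03.40 standard, not our actual new SMS system, and it ignores the escape/extension table and the headers used for concatenated messages.
```
# Illustration of standard GSM 7-bit packing (GSM 03.38), as used inside
# GSM 03.40 TPDUs for the default alphabet. Not AAISP code; it ignores the
# escape/extension table and the UDH used for concatenated messages.
def pack_septets(septets):
    """Pack a sequence of 7-bit values into octets, least significant bits first."""
    out = bytearray()
    acc = 0      # bit accumulator
    bits = 0     # number of valid bits currently in the accumulator
    for s in septets:
        acc |= (s & 0x7F) << bits
        bits += 7
        while bits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            bits -= 8
    if bits:
        out.append(acc & 0xFF)
    return bytes(out)

# Lowercase letters share code points between ASCII and the GSM default
# alphabet, so "hello" packs to the well-known e8329bfd06.
print(pack_septets(b"hello").hex())
```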
Update
21 Apr 2014 16:27:23

Work is going well on this, and we hope to switch Mobile Originated texting (i.e. texts from the SIP2SIM) over to the new system this week. If that goes to plan we can move some of the other ingress texting over to the new system one by one.

We'll be updating documentation at the same time.

The new system should be a lot more maintainable. We have a number of open tickets with the mobile carrier and other operators to try and improve the functionality of texting to/from us. These cover things like correct handling of multi-part texts, and correct character set coding.

The plan is ultimately to have full UTF-8 unicode support on all texts, but that could take a while. It seems telcos like to mess with things rather than giving us a clean GSM TPDU for texts. All good fun.

Update
22 Apr 2014 08:51:09
We have updated the web site documentation on this to the new system, but this is not fully in use yet. Hopefully this week we will have it all switched over. Right now we have removed some features from the documentation (such as delivery reports), but we plan to have these reinstated soon once we have the new system handling them sensibly.
Update
22 Apr 2014 09:50:44
MO texts from SIP2SIM are now using the new system - please let support know of any issues.
Update
22 Apr 2014 12:32:07
Texts from Three are now working to ALL of our 01, 02, and 03 numbers. These are delivered by email, http, or direct to SIP2SIM depending on the configuration on our control pages.
Update
23 Apr 2014 09:23:20
We have switched over one of our incoming SMS gateways to the new system now. So most messages coming from outside will use this. Any issues, please let support know ASAP.
Update
25 Apr 2014 10:29:50
We are currently running all SMS via the new platform - we expect there to be more work still to be done, but it should be operating as per the current documentation now. Please let support know of any issues.
Update
26 Apr 2014 13:27:37
We have switched the DNS to point SMS to the new servers running the new system. Any issues, please let support know.
Started 14 Apr 2014
Previously expected 01 May 2014

11 Apr 2014 15:50:28
Details
Posted: 11 Apr 2014 15:53:42
There is a problem with the C server and it needs to be restarted again after the maintenance yesterday evening. We are going to do this at 17:00 as we need it to be done as soon as possible. Sorry for the short notice.
Started 11 Apr 2014 15:50:28

07 Apr 2014 13:45:09
Details
Posted: 07 Apr 2014 13:52:31
We will be carrying out some maintenance on our 'C' SIP server outside office hours. It will cause disruption to calls, but is likely only to last a couple of minutes and will only affect calls on the A and C servers. It will not affect calls on our "voiceless" SIP platform or SIP2SIM. We will do this on Thursday evening at around 22:30. Please contact support if you have any questions.
Update
10 Apr 2014 23:19:59
Completed earlier this evening.
Started 07 Apr 2014 13:45:09
Previously expected 10 Apr 2014 22:45:00

25 Sep 2013
Details
Posted: 18 Sep 2013 16:32:41
We have received notification that Three's network team will be carrying out maintenance on one of the nodes that routes our data SIM traffic between 00:00 and 06:00 on Weds 25th September. Some customers may notice a momentary drop in connections during this time as any SIMs using that route will disconnect when the link is shut down. Any affected SIMs will automatically take an alternate route when they try and reconnect. Unfortunately, we have no control over the timing of this as it is dependent on the retry strategy of your devices. During the window, the affected node will be offline therefore SIM connectivity should be considered at risk throughout.
Started 25 Sep 2013

[DNS, Email and Web Hosting] At Risk Period for Web Hosting - Open
Details
Posted: 21 Feb 14:43:14
We are carrying out maintenance on our customer facing web servers during this Thursday's maintenance window. We expect no more than a couple of minutes of downtime but web services should be considered "at risk" during the work.
Previously expected 23 Feb 22:00:00

Details
Posted: 06 Jul 2015 12:49:42
We have been advised by Three of works on their network (22:30 8th July to 04:50 9th July 2015) which will involve rerouting traffic from one of their nodes via alternate paths in their core. Although connections should automatically reroute, there will be brief amounts of packet loss. As a result, some partners may experience dropped connections. Any device reconnecting will automatically be routed via a new path. We apologise for any inconvenience this may cause and the short notice of this advisory.

Details
Posted: 12 Feb 2015 15:57:26
We have received the below PEW notification from one of our carriers that we take voice SIMs from. We have been advised by one of our layer 2 suppliers of emergency works to upgrade firmware on their routers to ensure ongoing stability. This will cause short drops in routing during the following periods:
00:00 to 06:00 Fri 13th Feb
00:00 to 04:00 Mon 16th Feb
Although traffic should automatically reroute within our core as part of our MPLS infrastructure, some partners may experience disruption to SIM connectivity due to the short heartbeats used on SIM sessions.

Yesterday 11:13:09
[DNS, Email and Web Hosting] Issues with email and webmail - Closed
Details
Posted: Yesterday 10:31:03
We are currently experiencing an issue affecting our incoming mail servers. Webmail will also be down as a result. We are currently investigating and will provide updates as they become available.
Update
Yesterday 11:13:22
Webmail and email access by IMAP/POP3 are looking better now. We apologise for the inconvenience this caused.
Started Yesterday 09:00:00 by AA Staff
Closed Yesterday 11:13:09

17 Sep 10:10:00
Details
Posted: 17 Sep 09:42:04
Latest from TalkTalk: BT advise their engineer is due on site at 08:40 to investigate, and they are still attempting to source a Fibre Precision Test Officer. Our field engineer has been called out and is en route to site (ETA 08:30).
Update
17 Sep 09:43:18
TalkTalk say affected area codes are: 01481, 01223, 01553, 01480, 01787, 01353 and maybe others. ( Impacted exchanges are Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Derdingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton,Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourne, Mattishall, Norwich North, Rorston, Science Park, Swaffham, Steeple Mordon, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton. )
Update
17 Sep 09:43:37
TalkTalk say: Our field engineer and BT field engineer have arrived at site with investigations to the root cause now underway. At this stage Incident Management is unable to issue an ERT until the engineers have completed their diagnostics.
Update
17 Sep 09:55:09
Some lines logged back in at around 09:48.
Update
17 Sep 10:10:17
Most are back online now.
Resolution From TalkTalk: Our NOC advised that alarms cleared at 09:45 and service has been restored. Our Network Support team has raised a case with Axians (vendor), as there appeared to be an issue between the interface cards in the NGE router and the backplane (which facilitates data flow from the interface cards through the NGE). This incident is resolved and will now be closed, with any further root cause analysis handled through the Problem Management process.
Started 17 Sep 06:20:00
Closed 17 Sep 10:10:00

13 Sep 16:37:14
Details
Posted: 13 Sep 16:17:34

I am pleased to confirm we have now launched "Quota Bonus".

The concept is simple, and applies to Home::1 and SoHo::1 on all levels including terabyte.

You start your billing month with your quota as normal, but get an extra bonus that is half of the unused quota, if any, from the previous month.

This allows people to build up a reserve and allow for occasional higher months without needing top-up.

Thanks to all of the customers for the feedback on my blog posts on this. --Adrian.

P.S. yes, it is sort of cumulative, see examples on http://aa.net.uk/broadband-quota.html
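As a worked illustration of the rule above (using the 200GB tariff mentioned elsewhere on this page and made-up usage figures), each month's allowance is the base quota plus half of whatever was left unused the previous month:
```
# Worked illustration of the Quota Bonus rule described above: each month's
# allowance is the base quota plus half of the previous month's unused quota.
# The 200GB base matches the Home::1 example on this page; the usage figures
# are invented purely for illustration.
base = 200                      # GB per month
usage = [120, 150, 260, 90]     # hypothetical usage, in GB

bonus = 0.0
for month, used in enumerate(usage, start=1):
    allowance = base + bonus
    unused = max(allowance - used, 0)
    print(f"Month {month}: allowance {allowance:g} GB, used {used} GB, "
          f"bonus carried into next month {unused / 2:g} GB")
    bonus = unused / 2
```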

Started 13 Sep 16:15:00

8 Sep 01:00:00
Details
Posted: 7 Sep 23:04:00
Packet loss has been noted to some destinations, routed via LONAP. Our engineers are currently investigating and attempting to work around the loss being observed.
Update
7 Sep 23:23:52
We have disabled all of our LONAP ports for the moment - this reduces our capacity somewhat, but at this time of day the impact to customers is low. We've seen unconfirmed reports that there is some sort of problem with the LONAP peering network, we are still investigating ourselves. (LONAP is a peering exchange in London which connects up lots of ISPs and large internet companies, it's one of the main ways we connect to the rest of the Internet).
Update
7 Sep 23:24:21
LONAP engineers are looking in to this.
Update
7 Sep 23:31:09
We are now not seeing packet loss on the LONAP network - we'll enable our sessions after getting an 'all-clear' update from LONAP staff.
Update
7 Sep 23:39:44
Packet loss on the LONAP network has returned. We still have our sessions down, and we're waiting for the all-clear from LONAP before we enable our sessions again. Customers are, on the whole, unaffected by this. There are reports of high latency spikes to certain places, which may or may not be related to what is happening with LONAP at the moment.
Update
8 Sep 06:57:44
We have re-enabled our LONAP sessions.
Resolution The LONAP peering exchange confirm that they had some sort of network problem which was resolved at around 1AM. It's unconfirmed, but the problem looks to be related to some sort of network loop.
Broadband Users Affected 100%
Started 7 Sep 22:42:00 by AA Staff
Closed 8 Sep 01:00:00
Previously expected 8 Sep 03:01:52 (Last Estimated Resolution Time from AAISP)

7 Sep 22:57:32
Details
Posted: 1 Sep 13:27:48
The support wiki will be unavailable from approx 22:00 on 2017-09-07 to 01:00 on 2017-09-08 whilst it is moved to a new hypervisor.
Update
7 Sep 22:01:38
The maintenance window has started, and the support pages will be unavailable until this work has completed. Further updates will be posted once progress has been made.
Update
7 Sep 22:58:06
The maintenance window is now complete. The VM is on the new hypervisor and the support wiki is back online.
Started 7 Sep 22:00:00 by AA Staff
Closed 7 Sep 22:57:32
Cause AAISP
Previously expected 8 Sep 01:00:00 (Last Estimated Resolution Time from AAISP)

6 Sep 13:11:00
Details
Posted: 6 Sep 09:43:53
From 10AM today there will be a brief period where this status page and our web-based IRC client servers will be unavailable. This is due to the underlying hardware having its RAM changed. This server is hosted in Amsterdam off our network and is maintained by a third party who are carrying out this work.
Resolution This work has been completed.
Started 6 Sep 10:00:00
Closed 6 Sep 13:11:00

5 Sep 14:30:00
Details
Posted: 5 Sep 12:01:04
We are seeing very high latency - over 1,000ms - on many lines in the East of England, typically around the Cambridgeshire/Suffolk area. This is affecting BT circuits; TalkTalk circuits are OK. We are investigating further and contacting BT. We suspect this is a failed link within the BT network in the Cambridge area. More details to follow shortly.
Update
5 Sep 12:28:00

Example line graph.
Update
5 Sep 12:35:38
We're currently awaiting a response from BT regarding this.
Update
5 Sep 12:37:16
BT are now actively investigating the fault.
Update
5 Sep 14:04:16
As expected, this is affecting other ISPs who use BT backhaul.
Update
5 Sep 14:23:02
Latest update from BT:- "The Transmission group are investigating further they are carrying out tests on network nodes, As soon as they have identified an issue we will advise you further. We apologies for any inconvenience caused while testing is carried out."
Update
5 Sep 14:34:35
Latency is now back to normal. We will post again when we hear back from BT.
Resolution BT have confirmed that a card in one of their routers was replaced yesterday to resolve this.
Started 5 Sep 11:00:00
Closed 5 Sep 14:30:00

4 Sep 17:23:17
Details
Posted: 4 Sep 17:22:46

We have a number of tariff changes planned, after a lot of interesting comments from my blog post - thank you all.

Some things are simple and we are able to do them sooner rather than later, like the extra 50GB already announced. Some will not happen until mid to late October as they depend on other factors. Some may take longer still.

To try and ensure we get improvements as quickly as possible for customers I am updating a news item on our web site with details as we go.

http://aa.net.uk/news-2017-tariffs.html

As you will see, we are testing a change to make top-up on Home::1/SoHo::1 not expire. We have the end of a period (full moon) in two days where we can see if code changes work as expected on a live customer line. If all goes well then later this week we can change the description on the web site and officially launch this change.

Do check that page for updates and new features we are adding as we go.

Update
6 Sep 13:57:18
We have made top-up on Home::1 and SoHo::1 not expire, continuing until you have used it all. This applies to any top-up purchased from now on.
Started 4 Sep 17:19:37

3 Sep 08:14:53
Details
Posted: 3 Sep 08:12:02

We have changed the monthly quota allowances on Home::1 and SoHo::1 today, increasing all of the sub terabyte rates by 50GB per month, without changing prices.

I.e. you now get 200GB for the previous price of 150GB, and 300GB for the previous price of 250GB.

Existing customers have had this additional amount added to their September quota.

Started 3 Sep 08:10:00

29 Aug 13:59:08
Details
Posted: 7 Jul 10:39:42

For the past few years we've been supplying the ZyXEL VMG1312-B10A router. This is being discontinued and we will start supplying its replacement, the ZyXEL VMG1312-B10D (note the subtle difference!).

The new router is smaller than the previous one and has a very similar feature set and web interface to the old one.

We are still working through our configuration process and are updating the Support site with documentation. We are hoping this model will resolve many of the niggles we have with the old one too.

Started 7 Jul 13:12:00

29 Aug 13:58:28
Details
Posted: 29 Sep 2016 16:06:57

We're looking for a new member of staff for our front line technical support team, and another to join our sales/order processing team here in Bracknell.

Please do send an email to jobs@aa.net.uk for further information if you are interested.

Started 29 Sep 2016 16:00:00

14 Aug 13:57:23
[DNS, Email and Web Hosting] Incoming mail issues - Closed
Details
Posted: 14 Aug 11:48:45
A couple of our incoming mail servers have gone down due to a power issue in the datacentre. Our other mail servers have picked up the load; however, there would have been a delay in receiving mail during the failover. Incoming mail should be fine now and we are investigating what caused the issue.
Started 14 Aug 11:10:38 by AA Staff
Closed 14 Aug 13:57:23

29 Aug 13:00:00
Details
Posted: 17 Jun 15:24:16
We've seen very slight packet loss on a number of TalkTalk connected lines this week in the evenings. This looks to be congestion; it may show up on our CQM graphs as a few pixels of red at the top of the graph between 7pm and midnight. We have an incident open with TalkTalk. We moved traffic to our Telehouse interconnect on Friday afternoon, and Friday evening looked to be better. This may mean that the congestion is related to TalkTalk in Harbour Exchange, but it's a little too early to tell at the moment. We are monitoring this and will update again after the weekend.
Update
19 Jun 16:49:34

TalkTalk did some work on the Telehouse side of our interconnect on Friday as follows:

"The device AA connect into is a chassis with multiple cards and interfaces creating a virtual switch. The physical interface AA plugged into was changed to another physical interface. We suspect this interface to be faulty as when swapped to another it looks to have resolved the packet loss."

We will be testing both of our interconnects individually over the next couple of days.

Update
20 Jun 10:29:05
TalkTalk are doing some work on our Harbour Exchange side today. Much like the work they did on the Telehouse side, they are moving our port. This will not affect customers though.
Update
28 Jun 20:46:34

Sadly, we are still seeing very low levels of packet loss on some TalkTalk connected circuits in the evenings. We have raised this with TalkTalk today; they have investigated this afternoon and say: "Our Network team have been running packet captures at Telehouse North and replicated the packet loss. We have raised this into our vendor as a priority and are due an update tomorrow."

We'll keep this post updated.

Update
29 Jun 22:12:17

Update from TalkTalk regarding their investigations today:- Our engineering team have been working through this all day with the Vendor. I have nothing substantial for you just yet, I have been told I will receive a summary of today's events this evening but I expect the update to be largely "still under investigation". Either way I will review and fire an update over as soon as I receive it. Our Vendor are committing to a more meaningful update by midday tomorrow as they continue to work this overnight.

Update
1 Jul 09:39:48
Update from TT: Continued investigation with Juniper, additional PFE checks performed. Currently seeing the drops on both VC stacks at THN and Hex. JTAC have requested additional time to investigate the issue. They suspect they have an idea what the problem is, however they need to go through the data captures from today to confirm that it is a complete match. Actions Juniper - Review logs captured today, check with engineering. Some research time required, Juniper hope to have an update by CoB Monday. Discussions with engineering will be taking place during this time.
Update
2 Jul 21:19:57

Here is an example - the loss is quite small on individual lines, but as we are seeing this sort of loss on many circuits at the same time (evenings) it makes this more severe. It's only due to our constant monitoring that this gets picked up.

Update
3 Jul 21:47:31
Today's update from TalkTalk: "JTAC [TT's vendor's support] have isolated the issue to one FPC [Flexible PIC Concentrator] and now need Juniper Engineering to investigate further... unfortunately Engineering are US-based and have a public holiday which will potentially delay progress... Actions: Juniper - Review information by [TalkTalk] engineering - Review PRs - if this is a match to a known issue or it's new. Some research time required, Juniper hope to have an update by Thursday"
Update
7 Jul 08:41:26
Update from TalkTalk yesterday evening: "Investigations have identified a limitation when running a mix mode VC (EX4200’s and EX4550's), the VC cable runs at 16gbps rather than 32gbps (16gbps each way). This is why we are seeing slower than expected speeds between VC’s. Our engineering team are working with the vendor exploring a number of solutions."
Update
17 Jul 14:29:29

Saturday 15th and Sunday 16th evenings were a fair bit worse than previous evenings. On Saturday and Sunday evening we saw higher levels of packet loss (between 1% and 3% on many lines) and we also saw slow single TCP thread speeds, much like we saw in April. We did contact TalkTalk over the weekend, and this has been blamed on a faulty card that TalkTalk replaced on Thursday, which has caused a traffic imbalance on this part of the network.

We expect things to improve but we will be closely monitoring this on Monday evening (17th) and will report back on Tuesday.

Update
22 Jul 20:23:24
TalkTalk are planning network hardware changes relating to this in the early hours of 1st August. Details here: https://aastatus.net/2414
Update
1 Aug 10:42:58
TalkTalk called us shortly after 9am to confirm that they had completed the work in Telehouse successfully. We will move traffic over to Telehouse later today and will be reporting back the outcome on this status post over the following days.
Update
3 Aug 11:23:55
TalkTalk confirmed that they have completed the work in Harbour Exchange successfully. Time will tell if these sets of major work have helped with the problems we've been seeing on the TalkTalk network; we will be reporting back the outcome on this status post early next week.
Update
10 Aug 16:39:30
The packetloss issue has been looking better since TalkTalk completed their work. We are still wanting to monitor this for another week or so before closing this incident.
Update
29 Aug 13:56:53
The service has been working well over the past few weeks. We'll close this incident now.
Started 14 Jun 15:00:00
Closed 29 Aug 13:00:00

24 Aug 13:25:29
[DNS, Email and Web Hosting] Roundcube outage - Closed
Details
Posted: 24 Aug 11:26:37
Roundcube is currently down as a result of an internal PEW which should not have been customer affecting. We expect it to be back within the hour and we will investigate why this service was affected.
Update
24 Aug 13:25:50
Webmail is currently fully functional.
Started 24 Aug 11:00:00 by AA Staff
Closed 24 Aug 13:25:29
Previously expected 24 Aug 12:30:00

14 Aug 09:14:59
Details
Posted: 11 Aug 18:44:38
We're needing to restart the 'e.gormless' LNS - this will cause PPP to drop for customers. Update to follow.
Update
11 Aug 18:46:19
Customers on this LNS should be logging back in (if not already).
Update
11 Aug 19:00:27
There are still some lines left to log back in, but most are back now
Update
11 Aug 19:10:47
Most customers are back now.
Update
13 Aug 12:12:47
This happened again on Sunday morning, and again a restart was needed. The underlying problem is being investigated.
Resolution We have now identified the cause of the issue that impacted both "careless" and "e.gormless". There is a temporary fix in place now, which we expect to hold, and the permanent fix will be deployed on the next rolling update of LNSs.
Started 11 Aug 18:30:00
Closed 14 Aug 09:14:59

11 Aug 02:26:48
Details
Posted: 9 Aug 10:21:04
At 02:00 on Friday we will be performing planned maintenance on one of our cross-London fibres. We do not anticipate any service disruption, however any work on the core network should be considered at risk.
Update
11 Aug 02:01:43
The planned work window has now started.
Update
11 Aug 02:27:04
Planned works completed without any issues.
Started 11 Aug 02:00:00 by AA Staff
Closed 11 Aug 02:26:48
Previously expected 11 Aug 03:00:00 (Last Estimated Resolution Time from AAISP)

10 Aug 13:20:00
Details
Posted: 10 Aug 16:17:39
We have had a problem with our call recording and voicemail systems. This problem started on Wednesday afternoon and was fixed by 13:20 today. This has meant that some call recordings have been lost and there would have been times when callers would have heard silence when they reached voicemail.
Started 9 Aug 16:00:00
Closed 10 Aug 13:20:00

7 Aug 22:06:11
Details
Posted: 7 Aug 15:12:51
We've had two incidents of one of our L2TP LNSs locking up over the weekend and causing disruption to some L2TP connected customers. We will therefore be swapping over the hardware on the morning of Tuesday 8th August at around 6AM. At this time L2TP sessions will be dropped and will then re-establish shortly after on the new hardware.
Resolution Cancelled! Following discussions with FireBrick developers we've decided not to swap the hardware in this case. The fault is likely to be software related, so instead we've changed the LNS's configuration slightly and are working on adding extra debugging to the software, which will be loaded once that coding work has been completed, likely in a couple of days' time.
Started 8 Aug 06:00:00
Closed 7 Aug 22:06:11

4 Aug 03:42:00
Details
Posted: 2 Aug 16:44:03
Between 2am and 3am we will be making changes to the configuration of our core switches. This is to aid our diagnostics regarding the MSO that occurred in July. We expect there to be a few short disruptions to routing, and there may be a PPP drop or two for some customers during this window.
Update
4 Aug 02:01:23
This work is about to commence.
Update
4 Aug 02:40:56
We have four small jobs to do, the first has been completed without any disruption. We're moving the estimated completion time to 4AM though, so as to give us a bit more time.
Update
4 Aug 02:58:57
The second job has been completed without any disruption.
Update
4 Aug 03:21:34
The third job has been completed, it did cause some routing issues for around 10 minutes.
Resolution This work has been completed.
Started 4 Aug 02:00:00
Closed 4 Aug 03:42:00
Previously expected 4 Aug 03:00:00

1 Aug 17:00:00
Details
Posted: 27 Jul 14:28:32
We are moving our Web IRC client (https://webirc.aa.net.uk/) off our network to increase availability in the unlikely event of an MSO. This work will be carried out on Tuesday, during support hours, so that staff are available to help anyone who is unable to connect to it.
Started 1 Aug 11:00:00
Closed 1 Aug 17:00:00
Previously expected 1 Aug 12:00:00

28 Jul 20:45:12
[DNS, Email and Web Hosting] SSL Certificates Updating - Info
Details
Posted: 10 Jul 18:12:06

We're updating SSL certificates for our email servers today. The old serial number is 12AD7B. The new serial number is 130CAB. Users who don't have the CAcert root certificate installed may see errors. This does not affect webmail or outgoing SMTP. Details on http://aa.net.uk/cacert.html

We have a new email proxy that should fix these problems; those affected by this can try setting their incoming mail server to mail.aa.net.uk (TLS only, no STARTTLS). Please note that this is not yet "launched" and is therefore not yet officially supported. More info here: https://aastatus.net/2407
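For anyone wondering what "TLS only, no STARTTLS" means in client terms, here is a minimal, unofficial sketch: the client opens an implicitly-TLS IMAP connection (conventionally port 993) rather than connecting in plain text and upgrading. The port and credentials below are assumptions/placeholders, and as noted above the proxy is not yet officially supported.
```
# Unofficial sketch of an implicit-TLS ("TLS only, no STARTTLS") IMAP login.
# Port 993 is the conventional implicit-TLS IMAP port and is an assumption
# here; the credentials are placeholders.
import imaplib

with imaplib.IMAP4_SSL("mail.aa.net.uk", 993) as imap:  # TLS from the first byte, no STARTTLS
    imap.login("user@example.com", "password")
    imap.select("INBOX")
    status, data = imap.search(None, "ALL")
    print(status, len(data[0].split()), "messages")
```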

Started 10 Jul 15:00:00

28 Jul 20:44:10
Details
Posted: 13 Jul 16:37:48

Next week we will be making a change to our incoming email servers aimed at reducing the amount of spam reaching customers' mailboxes.

Historically, we've been purposefully cautious about rejecting email outright, preferring to mark messages as spam based on a 'spam score'. Customers have options as to the scores at which messages are marked as spam or rejected. However, due to the high volume of spam, the cost of scanning each message and the extremely low risk of false positives, we're going to start rejecting messages from IP addresses that are known spam sources.

Specifically, the change is to reject messages from email servers that are listed in the "Spamhaus" block lists. These lists contain IP addresses that are known to be spam senders or are compromised machines in some way. Spamhaus have "a long-held reputation as having a false positive rate so low as to be unmeasurable and insignificant".

Many mail servers around the world use these same block lists, but if you are in any way concerned about this then please do get in touch with us.
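For readers unfamiliar with how such block lists are consulted, here is a generic sketch of a DNSBL lookup (not our actual mail-server configuration): the connecting IP's octets are reversed, the list's zone is appended, and a DNS answer means "listed". Note that Spamhaus may refuse queries sent via large public resolvers, so a local resolver is normally used.
```
# Generic illustration of a DNSBL lookup against the Spamhaus "zen" zone.
# This is how such lists are typically consulted, not AAISP's actual setup.
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any 127.0.0.x answer indicates a listing
        return True
    except socket.gaierror:
        return False                  # NXDOMAIN: not listed

# 127.0.0.2 is the documented DNSBL test address and should always be listed.
print(is_listed("127.0.0.2"))
```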
Update
19 Jul 16:15:43
We are making these changes at the moment. (Wednesday afternoon). As described above, we're not expecting this to impact customers in a negative way.
Started 13 Jul 16:30:00

13 Jul 18:00:00
[Broadband] TT blip - Closed
Details
Posted: 13 Jul 11:21:37
We are investigating an issue with some TalkTalk lines that disconnected at 10:51 this morning. Most have come back, but there are about 20 that are still offline. We are chasing TalkTalk Business.
Update
13 Jul 11:23:50
Latest update from TT: We have just had further reports that other resellers are also experiencing a mass of circuit drops at a similar time. This is currently being investigated by our NOC team, with updates to follow after investigation.
Started 13 Jul 10:51:49 by AAISP Pro Active Monitoring Systems
Closed 13 Jul 18:00:00
Previously expected 13 Jul 15:19:49

27 Jul 23:38:13
Details
Posted: 27 Jul 23:22:55
We are doing routine maintenance of one of our email servers this evening and at the moment one of the servers is sulking and not allowing logins. Some customers may be seeing login error messages when trying to receive email this evening. We're taking the affected server out of the 'pool' and expect to solve this shortly.
Update
27 Jul 23:39:18
Fixed! Sorry for the disruption.
Started 27 Jul 22:30:00
Closed 27 Jul 23:38:13

20 Jul 12:34:33
Details
Posted: 20 Jul 12:20:48
We're investigating a routing problem that started a few minutes ago affecting broadband and general routing across our network.
Update
20 Jul 12:24:33
This is affecting our services between London and Maidenhead, so is affecting some Ethernet circuits too.
Update
20 Jul 12:33:34
Things are getting back to normal now.
Resolution Unfortunately, this was caused by human error in a configuration change on one of our core switches. The work was being done as part of our investigations into the problems we had last week, and the change was not meant to cause any issues, but a mistake was made in the configuration. The change was rolled back and the process is being reviewed.
Started 20 Jul 12:15:00
Closed 20 Jul 12:34:33

19 Jul
Details
Posted: 7 Feb 14:32:32

We are seeing issues with IPv6 on a few VDSL cabinets serving our customers. There is no apparent geographical commonality amongst these, as far as we can tell.

Lines pass IPv4 fine, but only intermittently pass IPv6 TCP/UDP, for brief amounts of time (usually 4 or so packets) before breaking. Customers have tried a BT modem, an Asus modem, and our supplied ZyXEL as a modem and router; there is no difference with any of them. We also lent them a FireBrick to do some traffic dumps.

Traffic captures at our end and the customer end show that the IPv6 TCP and UDP packets are leaving us but not reaching the customer. ICMP (eg pings) do work.
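As a rough illustration of the symptom (not our diagnostic procedure): on an affected line an ICMPv6 ping succeeds, but an IPv6 TCP connection stalls. A simple check along these lines, assuming the chosen host is dual-stack, would time out on an affected circuit while "ping -6" still works.
```
# Rough illustration of testing whether IPv6 TCP traffic passes at all.
# Assumes the chosen host is dual-stack; any well-known dual-stack host works.
import socket

def ipv6_tcp_ok(host="www.aa.net.uk", port=443, timeout=5.0):
    try:
        family, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)      # on an affected line this times out
        return True
    except OSError:                  # covers DNS failures and timeouts
        return False

print("IPv6 TCP reachable:", ipv6_tcp_ok())
```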

The first case was reported to us in August 2016, and it has taken a while to get to this point. Until very recently there was only a single reported case. Now that we have four cases we have a bit more information and are able to look at commonalities between them.

Of these circuits, two are serving customers via TalkTalk and two are serving customers via BT backhaul. So this isn't a "carrier network issue", as far as we can make out. The only thing that we can find that is common is that the cabinets are all ECI. (Actually - one of the BT connected customers has migrated to TalkTalk backhaul (still with us, using the same cabinet and phone line etc) and the IPv6 bug has also moved to the new circuit via TalkTalk as the backhaul provider)

We are working with senior TalkTalk engineers to try to perform a traffic capture at the exchange - at the point the traffic leaves TalkTalk equipment and is passed on to Openreach - this will show if the packets are making it that far and will help in pinning down the point at which packets are being lost. Understandably this requires TalkTalk engineers working out of hours to perform this traffic capture and we're currently waiting for when this will happen.

Update
2 Mar 11:14:48
Packet captures on an affected circuit carried out by TalkTalk have confirmed that this issue most likely lies in the Openreach network. Circuits that we have been made aware of are being pursued with both BT and TalkTalk for Openreach to make further investigations into the issue.
If you believe you may be affected please do contact support.
Update
17 Mar 09:44:00
Having had TalkTalk capture the traffic in the exchange, the next step is to capture traffic at the road-side cabinet. This is being progressed with Openreach and we hope it will happen 'soon'.
Update
29 Mar 09:52:52
We've received an update from BT advising that they have been able to replicate the missing IPv6 packets; this is believed to be a bug, which they are pursuing with the vendor.

In the mean time they have also identified a fix which they are working to deploy. We're currently awaiting further details regarding this, and will update this post once further details become known.
Update
18 May 16:30:59
We've been informed that the fix for this issue is currently being tested with Openreach's supplier, but should be released to them on the 25th May. Once released to Openreach, they will then perform internal testing of this before deploying it to their network. We haven't been provided with any estimation of dates for the final deployment of this fix yet.
In the interim, all known affected circuits on TalkTalk backhaul have had a linecard swap performed at the cabinet as a workaround, which has restored IPv6 on all TT circuits known to be affected by this issue.
BT have come back to us suggesting that they too have a workaround, so we have requested that it is implemented on all known affected BT circuits to restore IPv6 to the customers known to have this issue on BT backhaul.
Resolution A fix was rolled out in the last week of June. Re-testing with impacted customers has shown that IPv6 is functioning correctly on their lines again now that Openreach have applied this fix.
Broadband Users Affected 0.05%
Started 7 Feb 09:00:00 by AA Staff
Closed 19 Jul