
Today 10:00:22
Details
2 Feb 10:10:46
We are seeing low level packet loss on BT lines connected to the Wapping exchange - approx 6pm to 11pm every night. Reported to BT...
Update
2 Feb 10:13:57
Here is an example graph:
Update
3 Feb 15:55:40
This has been escalated further with BT.
Update
4 Feb 10:27:37
Escalated further with BT, update due after lunch
Update
11 Feb 14:18:00
Still not fixed; we are arming yet another rocket to fire at BT. :-)
Update
24 Feb 12:58:51
Escalated further with BT; update due by the end of the day.
Update
Today 10:00:11
Again the last few users seeing packet loss will be moved onto another MSAN in the next few days.
Broadband Users Affected 0.09%
Started 2 Feb 10:09:12 by AAISP automated checking
Update expected Wednesday 11:00:14

Today 09:59:32
Details
20 Jan 12:53:37
We are seeing low level packet loss on some BT circuits connected to the EUSTON exchange. This has been raised with BT, and as soon as we have an update we will post it here.
Update
20 Jan 12:57:32
Here is an example graph:
Update
22 Jan 09:02:48
We are due an update on this one later this PM
Update
23 Jan 09:36:21
BT are chasing this and we are due an update at around 1:30PM.
Update
26 Jan 09:41:39
Work was done over night on the BT side to move load onto other parts of the network, we will check this again this evening and report back.
Update
27 Jan 10:33:05
We are still seeing lines with evening packet loss but BT don't appear to understand this and after spending the morning arguing with them they have agreed to investigate further. Update to follow.
Update
28 Jan 09:35:28
Update from BT due this PM
Update
29 Jan 10:33:57
BT are again working on this, but no further updates will be given until tomorrow morning.
Update
3 Feb 16:19:06
This one has also been escalated further with BT
Update
4 Feb 10:18:11
BT have identified a fault within their network and we have been advised that an update will be given after lunch today
Update
11 Feb 14:16:56
Yet another rocket on its way to BT.
Update
24 Feb 12:59:20
Escalated further with BT; update due by the end of the day.
Update
Today 09:59:19
Still waiting for BT to raise an emergency PEW (planned engineering work); the PEW will sort the last few lines where we are seeing peak time packet loss.
Broadband Users Affected 0.07%
Started 10 Jan 12:51:26 by AAISP automated checking
Update expected Wednesday 10:59:23
Previously expected 21 Jan 16:51:26

Today 09:58:14
Details
09 Dec 2014 11:20:04
Some lines on the LOWER HOLLOWAY exchange are experiencing peak time packet loss. We have reported this to BT and they are investigating the issue.
Update
11 Dec 2014 10:46:42
BT have passed this to TSO for investigation. We are waiting for a further update.
Update
12 Dec 2014 14:23:56
BT's TSO are currently investigating the issue.
Update
16 Dec 2014 12:07:31
Other ISPs are seeing the same problem. The BT Capacity team are now looking in to this.
Update
17 Dec 2014 16:21:04
No update to report yet, we're still chasing BT...
Update
18 Dec 2014 11:09:46
The latest update from this morning is: "The BT capacity team have investigated and confirmed that the port is not being over utilized, tech services have been engaged and are currently investigating from their side."
Update
19 Dec 2014 15:47:47
BT are looking to move our affected circuits on to other ports.
Update
13 Jan 10:28:52
This is being escalated further with BT now, update to follow
Update
19 Jan 12:04:34
This has been raised as a new reference as the old one was closed. Update due by tomorrow AM
Update
20 Jan 12:07:53
BT will be checking this further this evening so we should have more of an update by tomorrow morning
Update
22 Jan 09:44:47
An update is due by the end of the day
Update
22 Jan 16:02:24
This has been escalated further with BT, update probably tomorrow now
Update
23 Jan 09:31:23
We are still waiting for a PEW to be relayed to us. BT will be chasing this for us later in the day.
Update
26 Jan 09:46:03
BT are doing a 'test move' this evening where they will be moving a line onto another VLAN to see if that helps with the load, if that works then they will move the other affected lines onto this VLAN. Probably Wednesday night.
Update
26 Jan 10:37:45
There will be an SVLAN migration to resolve this issue on Wednesday 28th Jan.
Update
30 Jan 09:33:57
Network rearrangement is happening on Sunday so we will check again on Monday
Update
2 Feb 14:23:12
Network rearrangement was done at 2AM this morning; we will check for packet loss and report back tomorrow.
Update
3 Feb 09:46:49
We are still seeing loss on a few lines - I am not at all happy that BT have not yet resolved this. A further escalation has been raised with BT and an update will follow shortly.
Update
4 Feb 10:39:03
Escalated further, with an update due at lunch time.
Update
11 Feb 14:14:58
We are getting extremely irritated with BT on this one; it should not take this long to add extra capacity in the affected area. Rocket on its way to them now...
Update
24 Feb 12:59:54
Escalated further with BT; update due by the end of the day.
Update
Today 09:57:59
We only have a few customers left showing peak time packet loss, and for now the fix will be to move them onto another MSAN; I am hoping this will be done in the next few days. We really have been pushing BT hard on this and other areas where we are seeing congestion. I am pleased that there are now only a handful of affected customers left.
Update expected Wednesday 10:58:04
Previously expected 1 Feb 09:34:04 (Last Estimated Resolution Time from AAISP)

Today 06:17:18
Details
Today 06:17:18
Many of our customer broadband lines suffered a blip just after 2am. We're still investigating that, but it seems our RADIUS accounting got behind due to the high number of lines flapping. It looks like accounting has caught up now, but it will mean that we sent out some delayed notifications over night. This could result in, for example, line down/up notification emails delayed by several hours. The time stamp in the notification should show if this is the case.
Update was expected Today 09:08:32

Saturday
Details
26 Feb 18:07:10
We have a couple of minor changes to VoIP servers planned. At present we are testing these.

The changes are subtle but we hope will assist in working with customers with asterisk boxes. We have also updated the screen setting on Remote-Party-Id to reflect trust in the CLI.

The actual update is likely to be over the weekend after some testing tomorrow. Any issues, please let support know.

Update
27 Feb 13:57:41
Due to a number of other incidents today we have decided to delay testing of new features until next week with a roll out of the new features after that. Anyone wanting to test with asterisk, etc, please do let support know (ideally on irc).
Started Saturday
Expected close 5 Mar

19 Jan 16:08:37
Details
17 Jul 2014 10:08:44
Our email services can learn spam/non-spam messages. This feature is currently down for maintenance while we work on the back-end systems. This means that if you move email in to the various 'learn' folders it will stay there and will not be processed at the moment. For the moment, we advise customers not to use this feature. We will post updates in the next week or so, as we may well be changing how this feature works. This should not affect any spam scores etc, but do contact support if needed.
Update
29 Jul 2014 11:42:12
This project is still ongoing. This should not be causing too many problems though, as the spam checking system has many other ways to determine whether a message is spam or not. However, for now, if customers have email that is misclassified by the spam checking system, please email the headers in to support and we can make some suggestions.
Update
19 Jan 16:08:37
We are working on rebuilding the spam learning system. We expect to make this live in the next couple of weeks.
Started 17 Jul 2014 10:00:00
Update was expected 29 Jan 13:00:00

8 Jan 12:39:06
Details
8 Jan 12:49:24
We're going to remove the legacy fb6000.cgi page that was originally used to display CQM graphs on the control pages. This does not affect people who use the control pages as normal, but we've noticed that fb6000.cgi URLs are still being accessed occasionally. This is probably because the old page is being used to embed graphs into people's intranet sites, for example, but accessing graph URLs via fb6000.cgi has been deprecated for a long time. The supported method for obtaining graphs via automated means is the "info" command on our API (http://aa.net.uk/support-chaos.html). This is likely to affect only a handful of customers, but if you believe you're affected and require help with accessing the API, please contact support. We will remove the old page after a week (on 2015-01-15).
Update
9 Jan 08:52:28
We'll be coming up with some working examples of using our CHAOS API to get graphs; we'll post an update here today or Monday.
Update
12 Jan 16:19:58
We have an example here: https://wiki.aa.net.uk/CHAOS
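For anyone scripting against the API before reading the wiki page, the general shape of an automated graph request can be sketched as below. This is purely illustrative: the hostname, endpoint and parameter names here are assumptions, not the documented CHAOS interface, which the wiki page above describes.

```python
# Hypothetical sketch of building a CHAOS-style "info" request URL for
# one broadband login.  The base URL and parameter names are invented
# for illustration -- consult https://wiki.aa.net.uk/CHAOS for the
# real interface.
from urllib.parse import urlencode

def graph_url(base, login, extra):
    """Build an 'info' request URL for the given line login."""
    query = urlencode({"command": "info", "login": login, **extra})
    return f"{base}?{query}"

url = graph_url("https://chaos.example.net/info", "ab123@a.1",
                {"graph": "today"})
print(url)
```

The returned URL can then be fetched with any HTTP client, with credentials supplied however the real API requires.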
Started 8 Jan 12:39:06 by AA Staff
Previously expected 15 Jan 17:00:00

18 Aug 2014 10:00:00
Details
18 Aug 2014 10:48:39

Our legacy 'C' VoIP platform will be removed from service on March 2nd 2015.

This platform is now old, tired and we have a better VoIP platform: our FireBrick-based 'Voiceless' platform.

We have created a wiki page with details for customers needing to move platforms: http://wiki.aa.org.uk/VoIP_-_Moving_Platform

We will be contacting customers individually by email later in the year, but we'd recommend that customers start moving now. The wiki page above explains how to move, and in most cases it is simply changing the server details in your VoIP device. Please do contact Support for help though.

Started 18 Aug 2014 10:00:00 by AAISP Staff
Update was expected Today 11:00:00
Previously expected Today 10:00:00

03 Jun 2014 17:00:00
Details
03 Jun 2014 18:20:39
The router upgrades went well, and now there is a new factory release we'll be doing some rolling upgrades over the next few days. Should be minimal disruption.
Update
03 Jun 2014 18:47:21
First batch of updates done.
Started 03 Jun 2014 17:00:00
Previously expected 07 Jun 2014

14 Apr 2014
Details
13 Apr 2014 17:29:53
We handle SMS, both outgoing from customers, and incoming via various carriers, and we are now linking in once again to SMS with mobile voice SIM cards. The original code for this is getting a tad worn out, so we are working on a new system. It will have ingress gateways for the various ways SMS can arrive at us, core SMS routing, and then output gateways for the ways we can send on SMS. The plan is to convert all SMS to/from standard GSM 03.40 TPDUs. This is a tad technical I know, but it will mean that we have a common format internally. This will not be easy as there are a lot of character set conversion issues, and multiple TPDUs where concatenation of texts is used. The upshot for us is a more consistent and maintainable platform. The benefit for customers is more ways to submit and receive text messages, including using 17094009 to make an ETSI in-band modem text call from suitable equipment (we think gigasets do this). It also means customers will be able to send/receive texts in a raw GSM 03.40 TPDU format, which will be of use to some customers. It also makes it easier for us to add other formats later. There will be some changes to the existing interfaces over time, but we want to keep these to a minimum, obviously.
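To make the concatenation point above concrete: in GSM 03.40, a multi-part text carries a User Data Header whose information element 0x00 holds the message reference, total part count and this part's sequence number. A minimal illustrative parser (not AAISP's code) might look like this:

```python
# Minimal sketch: find the 8-bit concatenation information element
# (IEI 0x00) in a GSM 03.40 User Data Header.  The UDH bytes passed in
# exclude the leading UDHL length octet.  Illustrative only.

def concat_info(udh: bytes):
    """Return (ref, total, seq) if the UDH contains an 8-bit
    concatenation element, else None."""
    i = 0
    while i + 2 <= len(udh):
        iei, iedl = udh[i], udh[i + 1]
        data = udh[i + 2:i + 2 + iedl]
        if iei == 0x00 and iedl == 3:        # concatenated SMS, 8-bit ref
            return data[0], data[1], data[2]  # ref, total parts, seq no.
        i += 2 + iedl                         # skip to next element
    return None

# Part 2 of 3 of message reference 0x2A:
print(concat_info(bytes([0x00, 0x03, 0x2A, 0x03, 0x02])))  # (42, 3, 2)
```

Reassembly then just groups parts by (sender, ref) and orders them by sequence number, which is part of what makes a common internal TPDU format attractive.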
Update
21 Apr 2014 16:27:23

Work is going well on this, and we hope to switch Mobile Originated texting (i.e. texts from the SIP2SIM) over to the new system this week. If that goes to plan we can move some of the other ingress texting over to the new system one by one.

We'll be updating documentation at the same time.

The new system should be a lot more maintainable. We have a number of open tickets with the mobile carrier and other operators to try and improve the functionality of texting to/from us. These cover things like correct handling of multi-part texts, and correct character set coding.

The plan is ultimately to have full UTF-8 unicode support on all texts, but that could take a while. It seems telcos like to mess with things rather than giving us a clean GSM TPDU for texts. All good fun.

Update
22 Apr 2014 08:51:09
We have updated the web site documentation on this to the new system, but this is not fully in use yet. Hopefully this week we will have it all switched over. Right now we have removed some features from the documentation (such as delivery reports), but we plan to have these re-instated soon, once the new system handles them sensibly.
Update
22 Apr 2014 09:50:44
MO texts from SIP2SIM are now using the new system - please let support know of any issues.
Update
22 Apr 2014 12:32:07
Texts from Three are now working to ALL of our 01, 02, and 03 numbers. These are delivered by email, http, or direct to SIP2SIM depending on the configuration on our control pages.
Update
23 Apr 2014 09:23:20
We have switched over one of our incoming SMS gateways to the new system now. So most messages coming from outside will use this. Any issues, please let support know ASAP.
Update
25 Apr 2014 10:29:50
We are currently running all SMS via the new platform - we expect there to be more work still to be done, but it should be operating as per the current documentation now. Please let support know of any issues.
Update
26 Apr 2014 13:27:37
We have switched the DNS to point SMS to the new servers running the new system. Any issues, please let support know.
Started 14 Apr 2014
Previously expected 01 May 2014

11 Apr 2014 15:50:28
Details
11 Apr 2014 15:53:42
There is a problem with the C server and it needs to be restarted again after the maintenance yesterday evening. We are going to do this at 17:00 as we need it to be done as soon as possible. Sorry for the short notice.
Started 11 Apr 2014 15:50:28

07 Apr 2014 13:45:09
Details
07 Apr 2014 13:52:31
We will be carrying out some maintenance on our 'C' SIP server outside office hours. It will cause disruption to calls, but is likely only to last a couple of minutes and will only affect calls on the A and C servers. It will not affect calls on our "voiceless" SIP platform or SIP2SIM. We will do this on Thursday evening at around 22:30. Please contact support if you have any questions.
Update
10 Apr 2014 23:19:59
Completed earlier this evening.
Started 07 Apr 2014 13:45:09
Previously expected 10 Apr 2014 22:45:00

25 Sep 2013
Details
18 Sep 2013 16:32:41
We have received notification that Three's network team will be carrying out maintenance on one of the nodes that routes our data SIM traffic between 00:00 and 06:00 on Weds 25th September. Some customers may notice a momentary drop in connections during this time as any SIMs using that route will disconnect when the link is shut down. Any affected SIMs will automatically take an alternate route when they try and reconnect. Unfortunately, we have no control over the timing of this as it is dependent on the retry strategy of your devices. During the window, the affected node will be offline therefore SIM connectivity should be considered at risk throughout.
Started 25 Sep 2013

Details
12 Feb 15:57:26
We have received the below PEW notification from one of our carriers that we take voice SIMs from. We have been advised by one of our layer 2 suppliers of emergency works to upgrade firmware on their routers to ensure ongoing stability. This will cause short drops in routing during the following periods: 00:00 to 06:00 Fri 13th Feb, and 00:00 to 04:00 Mon 16th Feb. Although traffic should automatically reroute within our core as part of our MPLS infrastructure, some partners may experience disruption to SIM connectivity due to the short heartbeats used on SIM sessions.

Saturday 14:11:01
Details
27 Feb 12:30:57
We're investigating a problem with VoIP audio: calls breaking up etc. This looks to be some packet loss somewhere between us and our carriers. We'll update this post again shortly.
Update
27 Feb 13:04:21
We have identified the cause of this packetloss and are looking in to fixing it.
Update
27 Feb 14:55:30
We're working closely with a 3rd party that is involved in a BGP traffic problem between us and them. This is taking longer to get to the bottom of than we first thought.
Update
27 Feb 15:22:14
As we and the other BGP peer have not been able to get to the root cause of the problem we have put in a temporary fix. This has brought traffic levels back down to normal.
Update
27 Feb 16:26:46
Surprisingly, the problem has come back even though peering has been disabled! Needless to say, we are investigating again!
Update
27 Feb 17:00:05
The problem has gone away again whilst it was being looked in to.
Update
27 Feb 17:04:04

It's worth us explaining the problem... We have a peer at LINX that is sending us lots of traffic. This traffic is not for us, but for someone completely different - a different country even. Even though we have stopped the peering to this 3rd party, the traffic is still being sent, intermittently. This is causing our links to be filled, and hence causes packet loss.

We have been in direct contact with the 3rd party all afternoon, and we and they are confused as to how this is happening. At this point in time, we suspect some kind of router memory corruption which is causing the router to send traffic to the wrong peer. This type of problem is difficult to prove, and so it is taking time to get to the bottom of it.

We are still in contact with the 3rd party and will work to resolve this with them.

Resolution We were able to stop the floods of traffic yesterday afternoon, as a temporary measure, but the underlying problem remained until 10am Saturday, when the LINX-facing card at the peer was reset after the issue was reported by other LINX members. It is a shame that this was not done yesterday. This does confirm that it was not just AAISP that was affected by this. We will be working on contingency plans to allow us to react more efficiently to something like this in future. Thank you all for your understanding.
Started 27 Feb 12:15:00
Closed Saturday 14:11:01

27 Feb 09:51:04
[Email and Web Hosting] - Problems with our webserver - Closed
Details
27 Feb 09:06:37
We are investigating a problem with some of our web hosting customers with a .co.uk domain seeing a 404 Page not found message. Engineers are investigating this now.
Update
27 Feb 09:27:46
We are restoring missing files from our nightly backup, these should be restored in the next hour or so. We're not sure yet what has caused some websites to be removed. We are still investigating and will update this post shortly.
Update
27 Feb 09:33:21
This looks to have been caused by one of our nightly jobs that tidies web site files of ceased domains. This script looks to have been over zealous! We do apologise.
Resolution Missing web sites have been restored. We are aware of what caused this problem - our nightly maintenance scripts, which we will be fixing! Again, we do apologise for this.
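The kind of safety net that catches this class of accident is a plausibility check before any deletion. A purely illustrative sketch (names and the threshold are invented; this is not our actual script):

```python
# Hypothetical sanity guard for a nightly cleanup job: refuse to run if
# the ceased-domain list looks implausibly large or disagrees with what
# is actually hosted, so a human reviews it instead.

def safe_to_remove(ceased, hosted, max_fraction=0.05):
    """True only if every 'ceased' domain is actually hosted and the
    list covers at most max_fraction of all hosted sites."""
    if not hosted:
        return False
    if any(d not in hosted for d in ceased):
        return False            # list disagrees with reality: abort
    return len(ceased) / len(hosted) <= max_fraction

hosted = {f"site{i}.example.co.uk" for i in range(100)}
print(safe_to_remove(["site1.example.co.uk"], hosted))  # True: 1% is plausible
print(safe_to_remove(list(hosted), hosted))             # False: would wipe everything
```

A dry-run mode that merely logs what would be deleted gives the same protection with even less risk.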
Started 27 Feb 09:05:19
Closed 27 Feb 09:51:04

24 Feb 13:00:22
Details
2 Feb 11:21:40

Below is a list of exchanges that BT plan to WBC enable between April - June this year.

This is for information only and we will attempt to bulk migrate customers to the 21CN network as and when they are enabled.

ABERCHIRDER
ANCRUM
ANGLE
ANSTEY MILLS
ASHBURY
AVEBURY
BARLASTON
BEAL
BERRIEW
BILLESDON
BIXTER
BOBBINGTON
BODFARI
BRENT KNOLL
BRETTON
BROMESBERROW
BUCKLAND NEWTON
BULWICK
BURGH ON BAIN
BURRELTON
CAPUTH
CHOLESBURY
CHURCHSTANTON
CLANFIELD
COBERLEY
COLINSBURGH
CRANFORD
CREATON
CRONDALL
CRUCORNEY
CYNWYL ELFED
DALE
DINAS CROSS
DINAS MAWDDWY
DITTON PRIORS
DOLWEN
DUNECHT
DUNRAGIT
DURLEY
EARLDOMS
EAST HADDON
EAST MEON
EDDLESTON
FAYGATE
GAMLINGAY
GAYTON
GLAMIS
GLENLUCE
GREAT CHATWELL
GREENLAW
HAMNAVOE
HUXLEY
ILMINGTON
INNERWICK
KELSHALL
KINLET
KIRKCOLM
LANGHOLM
LANGTREE
LITTLE STEEPING
LLANDDAROG
LLANFAIRTALHAIARN
LLANFYLLIN
LLANNEFYDD
LYDFORD
LYONSHALL
MANORBIER
MICHELDEVER
MIDDLETON SCRIVEN
MIDDLETON STONEY
MILLAND
MILTON ABBOT
MUNSLOW
MUTHILL
NANTGLYN
NEWNHAM BRIDGE
NORTH CADBURY
NORTH CRAWLEY
NORTH MOLTON
NORTHWATERBRIDGE
OFFLEY
PARWICH
PENTREFOELAS
PUNCHESTON
SHEERING
SHEPHALL
STEBBING
STOKE GOLDINGTON
STOW
SUTTON VENY
TALYBONT ON USK
TEALBY
TWINSTEAD
UFFINGTON
WATERHOUSES
WITHERIDGE
WIVELSFIELD GREEN
WOBURN
LEWDOWN
ROMSLEY


24 Feb 12:57:31
Details
11 Feb 10:17:36
We are seeing evening congestion on the Wrexham exchange; two other BRASs off that exchange are also affected: 21CN-BRAS-RED6-SF and 21CN-BRAS-RED7-SF. Customers can check which BRAS/exchange they are connected to from our control pages.
Update
11 Feb 10:27:08
Here is an example graph:
Update
13 Feb 11:39:14
We are chasing BT for an update, and as soon as we have further news we will update this post.
Update
16 Feb 10:20:43
It looks like the peak time latency just went away Thursday evening with no report from BT that they actually changed something. We will continue monitoring for the next few days to ensure it really has gone away.
Broadband Users Affected 0.05%
Started 11 Feb 10:12:08 by AA Staff
Closed 24 Feb 12:57:31

13 Feb 15:31:16
Details
13 Feb 14:29:06
We currently supply the Technicolor TG582 for most ADSL services, but we are considering switching to a new router, the ZyXEL VMG1312-B.

It is very comprehensive and does both ADSL and VDSL as well as bridging and wifi. It means we can have one router for all service types. As some of you may know, BT will be changing FTTC to be "wires only" next year, and so a VDSL router will be needed.

We have a small number available now for people to trial - we want to test the routers, our "standard config" and the provisioning process.

Please contact trial@aa.net.uk or #trial on the irc server for more information.

P.S. Obviously it does Internet Protocol: the current one, IPv6, and the old one, IPv4.

Obviously this initial trial is limited to a small number of routers, which we are sending out at no charge to try different scenarios. However, we expect to be shipping these as standard later in the month, and they will be available to purchase on the web site.

Update
13 Feb 15:49:08
Thanks for all the emails and IRC messages about trialling the new routers. We will contact customers back on Monday to arrange shipping of the units.
Update
16 Feb 10:43:54
We now have enough trialists for the new router, we will contact a selection of customers today to arrange delivery of the routers. Thanks
Started 13 Feb 14:25:34

15 Feb 12:24:53
Details
12 Feb 18:00:36
The Technicolor routers we supply have a factory default config which connects to us and operates a default setup. This is applied if someone uses the RESET button on the routers.

We have identified that a few dozen of these routers are in this state, which is not correct.

As part of work we are doing for some new routers we plan to start shipping soon, we have made it that any router logging in using the factory default will automatically be updated to have the correct config.

This will likely cause the graph to reset and the LNS in use depends on the login, but it will also set the WiFi SSID correctly, and other parameters. So users may see a change.

If you have any issues, do contact support.

Resolution The change in process has meant a number of routers have been auto-provisioned as expected, the remainder will when they next connect. Any issues, do contact support.
Started 12 Feb 17:00:00
Closed 15 Feb 12:24:53
Previously expected 14 Feb

5 Feb 13:07:52
Details
8 Jan 15:44:04
We are seeing some levels of congestion in the evening on the following exchanges: BT COWBRIDGE, BT MORRISTON, BT WEST (Bristol area), BT CARDIFF EMPIRE, BT THORNBURY, BT EASTON, BT WINTERBOURNE, BT FISHPONDS, BT LLANTWIT MAJOR. These have been reported to BT and they are currently investigating.
Update
8 Jan 15:56:59
Here is an example graph:
Update
9 Jan 15:21:53
BT have been chased further on this as they have not provided an update as promised.
Update
9 Jan 16:19:48
We did not see any congestion over night on the affected circuits but we will continue monitoring all affected lines and post another update on Monday.
Update
12 Jan 10:37:32
We are still seeing congestion on the Exchanges listed above between the hours of 20:00hrs and 22:30hrs. We have updated BT and are awaiting their reply.
Update
20 Jan 12:52:05
We are now seeing congestion starting from 19:30 to 22:30 on these exchanges. We are awaiting an update from BT.
Update
21 Jan 11:13:44
BT have sent this into the TSO team, we are to await their investigation results. We will provide another update as soon as we have a reply.
Update
22 Jan 09:06:14
An update is expected on this tomorrow
Update
23 Jan 09:33:48
This one is still being investigated at the moment, and may need a card or fibre cable fitted. We will chase this for an update later in the day.
Broadband Users Affected 0.30%
Started 8 Jan 15:40:15 by AA Staff
Closed 5 Feb 13:07:52

5 Feb 10:28:06
[Email and Web Hosting] - RoundCube webmail updated - Info
Details
5 Feb 10:28:06
We have updated the RoundCube webmail today. This is only a minor release so it should all look and work the same as before.
Started 5 Feb 10:00:00

4 Feb 10:55:10
Details
4 Feb 10:55:10
One of our carriers (AQL) will be doing some maintenance on their SMS platform on Wednesday 11th February between 10:00 and 11:00. This is to load new firmware on some routers, but no loss of service is expected. This is advisory only.
Started 4 Feb 10:51:46
Previously expected 12 Feb 11:00:00

3 Feb 21:56:57
Details
3 Feb 21:54:51

We have received a few reports from customers about a popup window claiming to be from us, encouraging the user to fill in a survey...

This is not from us and we have no connection with it. We wouldn't undertake this kind of activity. We have more information on this wiki page:

http://wiki.aa.org.uk/Mystery_Popups

Started 3 Feb 20:00:00

29 Jan 10:07:29
Details
27 Jan 11:48:41
We are currently seeing congestion in the evening between the hours of 8PM and 11PM on the following BRASs: BRAS 21CN-BRAS-RED4-CF-C, BRAS 21CN-ACC-ALN12-CF-C, BRAS 21CN-BRAS-RED8-CF-C. We have raised this into BT and their Estimated completion date is: 29-01-2015 11:23 We will update you as soon as we have some more information.
Update
27 Jan 12:02:35
Here is an example graph:
Resolution Nothing back from BT, but we suspect they have increased capacity across the links. Any further news on this and we will update the post.
Started 27 Jan 11:44:56
Closed 29 Jan 10:07:29

28 Jan 20:00:00
Details
28 Jan 12:31:39
Due to the "GHOST" vulnerability, we are carrying out an emergency PEW of customer facing email and web hosting services throughout the day today. We will avoid down time where possible, but please consider this an "at risk" period for email and web hosting.
Resolution Servers have been updated.
Started 28 Jan 12:23:22
Closed 28 Jan 20:00:00
Previously expected 28 Jan 16:23:22

24 Jan 08:17:21
Details
23 Jan 08:35:26
In addition to all of the BT issues we have ongoing (and affecting all ISPs), we have seen some signs of congestion in the evening last night - this is due to planned switch upgrade work this morning. Normally we aim not to be the bottleneck, as you know, but we have moved customers on to half of our infrastructure to facilitate the switch change, and this puts us right on the limit for capacity at peak times. Over the next few nights we will be redistributing customers back on to the normal arrangement of three LNSs with one hot spare, and this will address the issue. Hopefully we have enough capacity freed up to avoid the issue tonight. Sorry for any inconvenience. Longer term we have more LNSs planned as we expand anyway.
Update
24 Jan 07:30:14
The congestion was worse last night, and the first stage of moving customers back to correct LNSs was done over night. We are completing this now (Saturday morning) to ensure no problems this evening.
Resolution Lines all moved to correct LNS so there should be no issues tonight.
Started 22 Jan
Closed 24 Jan 08:17:21
Previously expected 24 Jan 08:30:00

28 Jan 09:38:34
Details
4 Jan 09:45:22
We are seeing evening congestion on the Bristol North exchange, incident has been raised with BT and they are investigating.
Update
19 Jan 09:51:48
Here is an example graph:
Update
22 Jan 08:58:26
The fault has been escalated further and we are expecting an update on this tomorrow.
Update
23 Jan 09:37:14
No IRAMS/PEW has been issued yet, and no further updates this morning. We are chasing BT. An update is expected around 1:30PM today.
Update
26 Jan 09:36:18
BT are due to update us on this after 3PM today.
Update
26 Jan 13:24:05
BT are looking to change the SFP port on the BRAS, we are chasing time scales on this now.
Update
26 Jan 14:16:43
This work will take place between 02:00 and 06:00 tomorrow morning
Update
27 Jan 09:23:21
Chasing BT to confirm the work was done over night, update to follow
Update
27 Jan 11:25:18
Nope, the work was postponed to this evening, so we won't know whether they have fixed it until Wednesday evening. We will see...
Update
28 Jan 09:38:34
Wow. Another BT congested link has been fixed over night.
Resolution BT changed the SFP port on the BRAS
Broadband Users Affected 0.01%
Started 4 Jan 09:45:22
Closed 28 Jan 09:38:34
Previously expected 29 Jan 13:23:24

28 Jan 09:25:23
Details
21 Jan 09:44:42
Our monitoring has picked up further congestion within the BT network causing high latency between 6pm and 11pm every night on the following BRASs: 21CN-BRAS-RED3-CF-C and 21CN-BRAS-RED6-CF-C. This is affecting BT lines only, in the Bristol and South/South West Wales areas. An incident has been raised with BT and we will update this post as and when we have updates.
Update
21 Jan 09:47:51
Here is an example graph:
Update
22 Jan 08:46:12
We are expecting a resolution on this tomorrow, 2015-01-23.
Update
23 Jan 09:35:26
This one is still with the Adhara NOC team. They are trying to solve the congestion problems. Target resolution is today 23/1/15, we have no specific time frame so we will update you as soon as we have more information from BT.
Update
26 Jan 10:03:08
We are expecting an update on this later this afternoon.
Update
26 Jan 16:23:32
BT are seeing some errors on slot 7 on one of the 7750s; they are looking to swap it over this evening and will then monitor it. We will update you once we get any further news.
Update
27 Jan 09:20:48
We are checking with BT whether or not a change was made over night.
Update
28 Jan 09:25:17
BT have actually cleared the congestion. We will monitor this very closely though.
Broadband Users Affected 0.03%
Started 4 Jan 18:00:00 by AA Staff
Closed 28 Jan 09:25:23
Previously expected 28 Jan 09:20:53

27 Jan 15:45:12
Details
27 Jan 13:45:04
There appears to be a problem with one of BT's BRASs (21CN-BRAS-RED3-BM-TH) where customers are unable to connect. We are speaking to BT about this now and will update this post ASAP.
Update
27 Jan 13:52:57
BT 'tech services' are aware and are dealing with it as we speak...
Update
27 Jan 13:54:12
There are engineers on site already!
Update
27 Jan 15:45:36
WOW. BT have fixed their BRAS fault in record time. :-)
Broadband Users Affected 0.01%
Started 27 Jan 12:19:21
Closed 27 Jan 15:45:12
Previously expected 27 Jan 17:42:21

27 Jan 12:07:56
Details
27 Jan 11:39:35
We have identified a problem on our VOIP platform where some calls are not being recorded, engineers are working on this now. Update to follow.
Update
27 Jan 12:11:32
The problem's fixed. We don't think it actually affected any customers, only staff, but please do contact support if you notice any problems.
Started 27 Jan 10:37:49
Closed 27 Jan 12:07:56
Previously expected 27 Jan 15:37:49

23 Jan 06:47:57
Details
19 Jan 09:55:58
We are replacing one of our core switches in London on Friday from 6AM. This should not be service affecting, but should be considered an 'at risk' period. We'll update this post as this work is carried out.
Update
21 Jan 15:01:16
This has been rescheduled for Friday 6AM.
Update
23 Jan 06:23:05
Work on this is about to start.
Update
23 Jan 06:48:22
This work has been completed.
Started 23 Jan 06:00:00
Closed 23 Jan 06:47:57

22 Jan 09:48:14
Details
13 Jan 12:17:05
We are seeing low level packet loss on the Hunslet exchange (BT tails); this has been reported to BT. All of our BT tails connected to the Hunslet exchange are affected.
Update
13 Jan 12:27:11
Here is an example graph:
Update
15 Jan 11:50:15
Having chased BT up they have promised us an update by the end of play today.
Update
16 Jan 09:07:51
BT have identified a card fault within their network. We are just waiting for confirmation as to when it will be fixed.
Update
19 Jan 09:31:11
It appears this is now resolved - well BT have added extra capacity on the link: "To alleviate congestion on acc-aln2.ls-bas -10/1/1 the OSPF cost on the backhauls in area 8.7.92.17 to acc-aln1.bok and acc-aln1.hma have been temporarily adjusted to 4000 from 3000. This has brought traffic down by about 10 to 15 % - and should hopefully avoid the over utilisation during peak"
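BT's quote describes an OSPF cost change: raising a link's cost makes the shortest-path calculation prefer other routes, shifting traffic off the congested backhaul. A toy illustration of that mechanism (the topology, node names, and costs below are invented for the sketch, not BT's actual network):

```python
# Illustrative only: a toy shortest-path calculation showing how raising
# an OSPF link cost (as BT did, 3000 -> 4000) diverts traffic onto an
# alternative backhaul. Topology and names are hypothetical.
import heapq

def dijkstra(graph, src, dst):
    """Return (total_cost, path) for the cheapest path src -> dst."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
    return float("inf"), []

# Two backhauls from the access node to the core: via "bok" and via "hma".
graph = {
    "acc-aln2": {"bok": 3000, "hma": 3500},
    "bok": {"core": 100},
    "hma": {"core": 100},
}
print(dijkstra(graph, "acc-aln2", "core"))  # -> (3100, ['acc-aln2', 'bok', 'core'])
graph["acc-aln2"]["bok"] = 4000             # cost raised, as in BT's change
print(dijkstra(graph, "acc-aln2", "core"))  # -> (3600, ['acc-aln2', 'hma', 'core'])
```

In the sketch, raising one backhaul's cost from 3000 to 4000 flips the cheapest path onto the alternative link, which is the same lever BT used to bring traffic down on the congested port.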
Resolution Work has been completed on the BT network to alleviate traffic
Broadband Users Affected 0.01%
Started 11 Jan 12:14:28 by AAISP Pro Active Monitoring Systems
Closed 22 Jan 09:48:14

19 Jan 12:01:39
Details
14 Jan 11:23:00
We have reported congestion affecting TT BERMONDSEY in the evenings, starting from the 8th of Jan. Updates will follow when we have them. Thanks for your patience.
Update
14 Jan 11:28:46
Here is an example graph
Update
15 Jan 10:31:10
TalkTalk have now fixed the congestion issue and we are no longer seeing packet loss.
Started 8 Jan 11:20:05

19 Jan 11:28:57
Details
19 Dec 2014 09:44:48
Today the CVE-2014-9222 router vulnerability, AKA 'Misfortune Cookie', has been announced at http://mis.fortunecook.ie/. This is reported to affect many broadband routers all over the world; the web page has further details.
We are contacting our suppliers for their take on this, we'll post follow-ups to this status post shortly.
It is also worth noting that at the time of writing CVE-2014-9222 is still 'reserved': http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-9222
Update
19 Dec 2014 09:52:28
Technicolor Routers: These routers are not (yet?) on the list; we are awaiting a response from Technicolor regarding this.
Update: Technicolor say "We don't use that webserver, so not impacted".
Update
19 Dec 2014 09:59:46
ZyXEL P-660R-D1: This router is on the list. We are awaiting a response from ZyXEL though. We do already have this page regarding the web interface on ZyXELs: http://wiki.aa.org.uk/Router_-_ZyXEL_P660R-D1#Closing_WAN_HTTP and closing the Web server from the WAN may help with this vulnerability.
Update: The version of RomPager (the web server) on the ZyXELs that we have been shipping for some time is 4.51. Allegedly only versions before 4.34 are vulnerable, so these should not be affected. You can tell the version with either:
wget -S IP.of.ZyXEL
or
curl --head IP.of.ZyXEL
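As a rough sketch of how you might act on that header programmatically: the function below parses the Server line that `curl --head` returns and applies the 'before 4.34' rule mentioned above. The exact header strings are assumptions; routers vary in what they report.

```python
# Hypothetical sketch: given a Server header as returned by
# `curl --head`, decide whether the reported RomPager version predates
# 4.34 (the versions said to be vulnerable to Misfortune Cookie).
import re

def rompager_vulnerable(server_header: str) -> bool:
    """Return True if the header reports RomPager older than 4.34."""
    m = re.search(r"RomPager/(\d+)\.(\d+)", server_header)
    if not m:
        return False  # not RomPager at all, or no version reported
    major, minor = int(m.group(1)), int(m.group(2))
    return (major, minor) < (4, 34)

# Example headers (assumed formats, for illustration):
print(rompager_vulnerable("Server: RomPager/4.51 UPnP/1.0"))  # False
print(rompager_vulnerable("Server: RomPager/4.07 UPnP/1.0"))  # True
```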
Update 2015-01-07: P-660R-D1 Not affected: http://www.zyxel.com/support/ZyXEL_Helps_You_Guard_against_misfortune_cookie_vulnerability.shtml
Update
19 Dec 2014 10:00:57
Dlink 320B: We supply these in bridge mode, so they are not vulnerable.
Update
19 Dec 2014 10:02:38
FireBrick: Firebricks are not vulnerable.
Started 19 Dec 2014 09:00:00

16 Jan 09:16:49
Details
15 Jan 14:26:38
Just before 2PM today a number of TalkTalk circuits dropped out; it looks like they have all now come back online. We are investigating this with TalkTalk.
Update
15 Jan 14:54:39
TT are investigating further but initial response from them "nothing is presenting me with an obvious answer other than what appears to have been a connectivity problem to HEX" Update to follow
Update
16 Jan 09:17:35
Update from TT. Summary: Network monitoring has identified that Wholesale Business customers across the country may have experienced a brief loss of service between 14:00 and 14:05. Network Monitoring completed by our NOC has identified that there was a drop of traffic at these times between a Network Core Router at the Brentford Data Centre (NCR002.BRE) and a Redback router (LTS001.HEX). This subsequently caused Wholesale Business customers who route through Harbour exchange to experience a loss of service.
Resolution Technical / Suspected Root Cause Investigations by our Network Support have identified that this issue occurred due to planned LTS work under SR8753. This work caused the queues on the LTS to fill up and caused a CPU spike, which in turn caused the tunnels to drop. There are plans to increase circuit capacity going forward to ease bandwidth levels and prevent a repeat of this issue.
Broadband Users Affected 1%
Started 15 Jan 14:24:54
Closed 16 Jan 09:16:49
Cause Carrier
Previously expected 15 Jan 18:24:54

15 Jan 22:00:56
Details
14 Jan 14:28:23

Snom have released a security announcement regarding their phones having number of vulnerabilities:
http://wiki.snom.com/8.7.5.15_OpenVPN_Security_Update

We advise customers to upgrade their firmware. We have a wiki page with more information on this:
http://wiki.aa.org.uk/SNOM_Firmware_Updates

We will be emailing customers that are using our VoIP service with a Snom phone shortly.

Update
15 Jan 22:00:56
Update 15th Jan 2015, 9pm: Snom have (yet again) updated their Security Update page. It now states that a firmware upgrade is not needed in most cases. We have advised customers to upgrade, but it seems it is not as clear-cut as it was earlier! We suggest people using Snom phones review Snom's Security Update pages: http://wiki.snom.com/Security_update
Started 14 Jan 14:00:00

13 Jan 11:46:40
Details
18 Dec 2014 20:00:52
The working days between Christmas and New Year are "Christmas" rate, this means that any usage on 29th, 30th, and 31st December is not counted towards your Units allowance. As usual, bank holidays are treated as 'Weekend' rate.
(This doesn't apply to Home::1 or Office::1 customers.)
We wish all our customers a Merry Christmas!
Started 18 Dec 2014 19:00:00

11 Jan 11:36:06
Details
11 Jan 11:14:41
We may have to restart one or two of our LNSs in the early hours of Monday morning as there seems to be an issue with them. This is not affecting service at present but does mean we do not have any of the normal graphs, which are essential to diagnostics and fault handling. Lines should reconnect promptly and ideally only be off for a few seconds, but, as usual, this very much depends on the routers, and some lines may take a few minutes to reconnect. There may be a second controlled PPP restart for some lines that end up on the wrong LNS once lines have stabilised.
Resolution False alarm, work cancelled.
Broadband Users Affected 66%
Started 12 Jan
Closed 11 Jan 11:36:06
Previously expected 12 Jan 07:00:00

10 Jan 20:00:00
Details
10 Jan 19:44:03
Since 19:20 we have seen issues on all TalkTalk backhaul lines. Investigating
Update
10 Jan 20:08:08
Looks to be recovering
Update
10 Jan 21:32:01
Most lines are up as of 8pm. We'll investigate the cause of this.
Started 10 Jan 19:20:00
Closed 10 Jan 20:00:00

10 Jan 15:35:00
Details
10 Jan 10:13:09
We are investigating a problem affecting some wholesale customers. This seems to be a data centre connectivity fault. Updates to follow.
Update
10 Jan 10:23:40
This is related to a fault with Datahop, outside of our network. They are aware and are investigating.
Update
10 Jan 15:37:12
Our Datahop ports are now back online. (Ironically, they came back online whilst we were waiting on hold for their NOC!) Wholesale lines are starting to reconnect. We suspect a faulty switch within Datahop is to blame.
Update
10 Jan 21:09:52
Datahop confirm this was caused by a faulty switch.
Started 10 Jan 09:20:00
Closed 10 Jan 15:35:00

8 Jan 12:51:58
Details
8 Jan 12:51:58
We are looking for someone to join our Technical Support team in Bracknell, info here: aa.net.uk/job.pdf
Started 8 Jan 12:50:00

17 Dec 2014 11:45:55
Details
17 Dec 2014 11:09:30
We are looking in to a problem on secondary-dns.co.uk at the moment as it is requesting transfers from the 'wrong' IP address. We hope to 'resolve' this soon.
Resolution This has been fixed, zones are catching up now. Sorry for the inconvenience caused.
Started 17 Dec 2014 09:00:00
Closed 17 Dec 2014 11:45:55

15 Dec 2014 10:00:00
Details
11 Dec 2014 15:52:46
The mobile carrier (Three) have a problem affecting the activation and suspension of SIMs. They are aware of this and it is being worked on.
Update
15 Dec 2014 14:50:04
This is still open, we are chasing this with the carrier.
Resolution From the carrier: "Three have monitored the issue over the weekend and no further incidents of this have been reported. They have restarted the affected platform and all requests have been processed as normal. We're sorry for any inconvenience for the delay in processing the requests."
Started 10 Dec 2014 13:00:00
Closed 15 Dec 2014 10:00:00

12 Dec 2014 11:00:40
Details
11 Dec 2014 10:42:15
We are seeing some TT connected lines with packet loss starting at 9AM yesterday and today. The loss lasts until 10AM, and a low level of loss continues after that. We have reported this to TalkTalk.
Update
11 Dec 2014 10:46:34
This is the pattern of loss we are seeing:
Update
12 Dec 2014 12:00:04
No loss has been seen on these lines today. We're still chasing TT for any update though.
Resolution The problem went away... TT were unable to find the cause.
Broadband Users Affected 7%
Started 11 Dec 2014 09:00:00
Closed 12 Dec 2014 11:00:40

11 Dec 2014 14:15:00
Details
11 Dec 2014 14:13:58
BT issue affecting SOHO, AKA GERRARD STREET (21CN-ACC-ALN1-L-GER). We have reported this to BT and they are now investigating.
Update
11 Dec 2014 14:19:33
BT are investigating, however the circuits are mostly back online.
Started 11 Dec 2014 13:42:11 by AAISP Pro Active Monitoring Systems
Closed 11 Dec 2014 14:15:00
Previously expected 11 Dec 2014 18:13:11 (Last Estimated Resolution Time from AAISP)

04 Dec 2014 10:18:06
Details
21 Jul 2014 15:49:07
We now have a new official URL for our Status Pages: https://aastatus.net The reason for the change is to make the status pages completely independent of any AAISP infrastructure. They were already hosted on a server in Amsterdam, outside of our network, and now the DNS is independent too. Anyone using status.aa.net.uk should update to use aastatus.net
Started 21 Jul 2014 15:45:00

04 Dec 2014 10:15:46
Details
04 Jul 2014 11:00:06
Just to update - we have the physical SIM cards now, and we have pricing agreed. They are not yet provisioned on the network and that will hopefully be start of next week at which point we'll be able to start selling them. Thank you all for your patience.
Started 04 Jul 2014
Previously expected 08 Jul 2014

02 Dec 2014 09:05:00
Details
01 Dec 2014 21:54:24
All FTTP circuits on Bradwell Abbey have packetloss. This started at about 23:45 on 30th November. This is affecting other ISPs too. BT did have an Incident open, but this has been closed. They restarted a line card last night, but the problem appears to have persisted since the restart. We are chasing BT.
Example graph:
Update
01 Dec 2014 22:38:39
It has been a struggle to get the front line support and the Incident Desk at BT to accept that this is a problem. We have passed this on to our Account Manager and other contacts within BT in the hope of a speedy fix.
Update
02 Dec 2014 07:28:40
BT have tried doing something overnight, but the packetloss still exists at 7am 2nd December. Our monitoring shows:
  • Packet loss stops at 00:30
  • The lines go off between 04:20 and 06:00
  • The packet loss starts again at 06:00 when the lines come back online
We've passed this on to BT.
Update
02 Dec 2014 09:04:56
Since 7AM today, the lines have been OK... we will continue to monitor.
Started 30 Nov 2014 23:45:00
Closed 02 Dec 2014 09:05:00

03 Dec 2014 09:44:00
Details
27 Nov 2014 16:31:03
We are seeing what looks like congestion on the Walworth exchange. Customers will be experiencing high latency, packetloss and slow throughput in the evenings and weekends. We have reported this to TalkTalk.
Update
02 Dec 2014 09:39:27
TalkTalk are still investigating this issue.
Update
02 Dec 2014 12:22:04
The congestion issue has been identified on the Walworth exchange and TalkTalk are in the process of balancing traffic.
Update
03 Dec 2014 10:30:14
Capacity has been increased and the exchange is looking much better now.
Started 27 Nov 2014 16:28:35
Closed 03 Dec 2014 09:44:00