
5 Aug
Details
5 May 22:22:42

TalkTalk is performing essential maintenance on its internal infrastructure in the early hours of 8th May 2015. This work means that services may be lost for up to 1 hour between midnight and 6am. This is likely to affect the following exchanges:

  • FLAX BOURTON
  • REDCLIFFE
  • WEDMORE
  • Yatton
  • Didcot
  • CHURCHDOWN
  • Cheltenham
  • SHRIVENHAM
  • STRATTON ST MARGARET
  • Swindon
  • Blunsdon
  • Cirencester
  • CRICKLADE
  • HAYDON WICK
  • PURTON
  • BATHEASTON
  • BOX
  • BRADENSTOKE
  • Chippenham
  • CORSHAM
  • CHURCHILL
  • Clevedon
  • NAILSEA
  • WRINGTON
  • LONG ASHTON
  • PILL
  • Brimscombe
  • Dursley
  • NAILSWORTH
  • TETBURY
  • Calne
  • Devizes
  • HAWTHORN
  • Melksham
  • NORTH TROWBRIDGE
  • BANWELL
  • BLEADON
  • WINSCOMBE
  • Worle
  • Weston Super Mare
  • ABSON
  • Downend
  • FISHPONDS
  • SALTFORD
  • CHEW MAGNA
  • Midsomer Norton
  • Radstock
  • STRATTON-ON-FOSSE
  • TEMPLE CLOUD
  • TIMSBURY
  • STOKE BISHOP
  • West
  • Frome
  • MELLS
  • WARMINSTER
  • BRADFORD-ON-AVON
  • LIMPLEY STOKE
  • Trowbridge
  • Westbury
  • ALMONDSBURY
  • FILTON
  • HENBURY
  • AVONMOUTH
  • PILNING
  • Portishead
  • BEDMINSTER
  • BISHOPSWORTH
  • BITTON
  • KEYNSHAM
  • Kingswood
  • CHIPPING SODBURY
  • FALFIELD
  • WINTERBOURNE
  • WOTTON UNDER EDGE
  • FAIRFORD
  • LECHLADE
  • MALMESBURY
  • SOUTH CERNEY
  • TOOTHILL
  • WOOTTON BASSETT
  • COMBE DOWN
  • Kingsmead
  • Bristol North
  • WESTBURY-ON-TRYM
  • EASTON
  • EASTVILLE
  • South
  • WHITCHURCH
  • LAVINGTON
  • MARLBOROUGH
  • PEWSEY
  • WROUGHTON
  • WANBOROUGH
  • LULSGATE
  • Glastonbury
  • OAKHILL
  • Wells
  • BERKELEY
  • THORNBURY
We apologise for any inconvenience that these works may cause you.
Planned start 5 Aug by TalkTalk

13 May 12:13:36
Details
5 May 10:21:17
BT have had trouble with this exchange (http://aastatus.net/2052) but we are now seeing evening congestion on TalkTalk connected lines. We have reported this and will update this post accordingly.
Update
5 May 12:38:47
TalkTalk have fixed a misconfiguration at their end. This should now be resolved. We'll check again tomorrow.
Update
7 May 08:55:48
Lines still seem to be congested; this is being looked in to by TalkTalk.
Update
13 May 12:13:36
Update from TT 'This should now properly be resolved. A faulty interface meant that we were running at 2/3 of capacity - only a few of your subscribers would have suffered last night hopefully but should now all be good.'
Started 5 May 10:00:00

4 May 21:52:34
Details
4 May 21:41:59
We are seeing congestion on the HORNDEAN and WATERLOOVILLE exchanges (Hampshire). This is usually noticeable in the evenings. This will be reported to BT, and we'll update this post as we hear more.
Started 4 May 21:38:34

13 Apr
Details
9 Apr 13:43:39

We have been advised by Three of works on their network which will involve rerouting traffic from one of their nodes via alternate paths in their core. Although connections should automatically reroute, there will be brief periods of packet loss. As a result, some customers may experience dropped connections. Any device reconnecting will automatically be routed via a new path.

This only affects our data-only SIMs.

Started 13 Apr

31 Mar 17:00:00
Details
31 Mar 16:24:26
Over the next few days we are working on some minor changes to the way we handle passwords on the control pages (clueless).

In these first stages you should see no impact, but there is a risk of issues, and we would ask anyone with problems logging in to the control pages, changing passwords, or logging in to DSL, SIMs, etc., to let us know.

Each stage is being tested on our test system and then deployed, with the first stage expected to be updated tonight.

The final stage will mean a change to where passwords are visible, and to the processes for issuing and changing passwords. We'll post more details closer to the time.

This is all part of ongoing work to improve security. Thank you for your understanding.

Update
2 Apr 15:53:01
The first stage seems to have gone well - our test/monitoring has been working well to help us check any anomalies and ensure consistency.

The next stage should be equally harmless as it means changing over various systems to use the new password hashes. We plan to work on this over Easter.

We will then go on to change the way passwords are issued when ordering and updated when customers wish to change them.

This work is all part of a general review and update of security for passwords on our various systems. Thank you for your understanding.
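The updates above mention switching systems over to new password hashes without saying which scheme is used. As a rough illustration of the general approach (salted, iterated hashing rather than storing or emailing plain text), here is a minimal sketch using Python's standard library; the function names, iteration count and storage format are our own illustration, not AAISP's implementation.

    # Illustrative only: a generic salted, iterated password hash.
    # The scheme, iteration count and storage format AAISP actually
    # chose are not stated in this post.
    import hashlib
    import hmac
    import os

    def hash_password(password: str, iterations: int = 100_000) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

    def verify_password(password: str, stored: str) -> bool:
        scheme, iterations, salt_hex, digest_hex = stored.split("$")
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                     bytes.fromhex(salt_hex), int(iterations))
        return hmac.compare_digest(digest.hex(), digest_hex)

One visible side effect of hashing, noted further down this post, is that passwords become case-sensitive, since the hash of "Password" and "password" differ.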

Update
3 Apr 12:55:34
We are progressing with updates - the login to the control pages is now switched over to the new hashes - any issues, please let me know on irc, but all looks good from here.

The RADIUS logins have changed over as well, to use line-based passwords (which are the same as the control pages login passwords at present). Again, please let us know of any issues, but so far all looks well.

The next step later today will be a change to how you change the password on the control pages - this will move to the same system we use on the accounts pages - an emailed link that offers a new password via https. This is safer than plain text emailed passwords.

Once that is complete, we plan to update the way passwords are issued when ordering new services, which will hopefully be done later today.

There will then be more testing and cleaning up to be done later.

Update
4 Apr 06:53:08
The first side effect that has been noted is that passwords on the control pages are now case-sensitive. Sorry for any confusion this may have caused.
Update
4 Apr 11:19:58
We expect the work for this weekend to stop now - with more later in the week or next weekend. We are now at a stage where we need to provide some clear documentation on the different levels of passwords and what levels of protection are provided for these in our systems.
Update
5 Apr 13:06:22
We are going ahead with more of the work this weekend now, and expect to separate control page login passwords from Line/DSL login passwords today or tomorrow. We'll post more details once the work is complete. We are currently running tests on our test systems.
Update
5 Apr 13:50:44
We have now separated login passwords and line passwords.

Any issues, please let us know.

Started 31 Mar 17:00:00
Previously expected 7 Apr

12 Mar 09:48:01
Details
12 Mar 09:48:01
Our wiki at http://wiki.aa.org.uk/ will be down for a while today due to an internal PEW. Sorry for any inconvenience.

19 Jan 16:08:37
Details
17 Jul 2014 10:08:44
Our email services can learn spam/non-spam messages. This feature is currently down for maintenance as we work on the back-end systems. This means that if you move email in to the various 'learn' folders, the messages will stay there and will not be processed at the moment. For the moment, we advise customers not to use this feature. We will post updates in the next week or so as we may well be changing how this feature works. This should not affect any spam scores etc, but do contact support if needed.
Update
29 Jul 2014 11:42:12
This project is still ongoing. This should not be causing too many problems though, as the spam checking system has many other ways to determine whether a message is spam or not. However, for now, if customers have email that is misclassified by the spam checking system then please email the headers in to support and we can make some suggestions.
Update
19 Jan 16:08:37
We are working on rebuilding the spam learning system. We expect to make this live in the next couple of weeks.
Started 17 Jul 2014 10:00:00
Update was expected 29 Jan 13:00:00

8 Jan 12:39:06
Details
8 Jan 12:49:24
We're going to remove the legacy fb6000.cgi page that was originally used to display CQM graphs on the control pages. This does not affect people who use the control pages as normal, but we've noticed that fb6000.cgi URLs are still being accessed occasionally. This is probably because the old page is being used to embed graphs into people's intranet sites, for example, but accessing graph URLs via fb6000.cgi has been deprecated for a long time. The supported method for obtaining graphs via automated means is via the "info" command on our API: http://aa.net.uk/support-chaos.html

This is likely to affect only a handful of customers but, if you believe you're affected and require help with accessing the API, please contact support. We will remove the old page after a week (on 2015-01-15).
Update
9 Jan 08:52:28
We'll be coming up with some working examples of using our CHAOS API to get graphs; we'll post an update here today or Monday.
Update
12 Jan 16:19:58
We have an example here: https://wiki.aa.net.uk/CHAOS
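For anyone scripting this before reading the wiki page, the sketch below only shows the general shape of pulling line information from an authenticated HTTPS/JSON API. The URL, function name and parameters here are placeholders we have invented for illustration, not the documented CHAOS interface, and it assumes the third-party 'requests' package; the worked example on https://wiki.aa.net.uk/CHAOS is authoritative.

    # Hypothetical sketch of calling an authenticated JSON API for line info.
    # The real endpoint, parameters and response format are documented at
    # https://wiki.aa.net.uk/CHAOS - everything below is a placeholder.
    import requests

    def fetch_line_info(base_url: str, username: str, password: str) -> dict:
        # e.g. base_url = "https://chaos.example.invalid/info"  (placeholder)
        response = requests.get(base_url, auth=(username, password), timeout=30)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        data = fetch_line_info("https://chaos.example.invalid/info", "user", "pass")
        print(data)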
Started 8 Jan 12:39:06 by AA Staff
Previously expected 15 Jan 17:00:00

03 Jun 2014 17:00:00
Details
03 Jun 2014 18:20:39
The router upgrades went well, and now there is a new factory release we'll be doing some rolling upgrades over the next few days. Should be minimal disruption.
Update
03 Jun 2014 18:47:21
First batch of updates done.
Started 03 Jun 2014 17:00:00
Previously expected 07 Jun 2014

14 Apr 2014
Details
13 Apr 2014 17:29:53
We handle SMS, both outgoing from customers, and incoming via various carriers, and we are now linking in once again to SMS with mobile voice SIM cards. The original code for this is getting a tad worn out, so we are working on a new system. It will have ingress gateways for the various ways SMS can arrive at us, core SMS routing, and then output gateways for the ways we can send on SMS.

The plan is to convert all SMS to/from standard GSM 03.40 TPDUs. This is a tad technical, I know, but it will mean that we have a common format internally. This will not be easy as there are a lot of character set conversion issues, and multiple TPDUs where concatenation of texts is used.

The upshot for us is a more consistent and maintainable platform. The benefit for customers is more ways to submit and receive text messages, including using 17094009 to make an ETSI in-band modem text call from suitable equipment (we think Gigasets do this). It also means customers will be able to send/receive texts in a raw GSM 03.40 TPDU format, which will be of use to some customers. It also makes it easier for us to add other formats later. There will be some changes to the existing interfaces over time, but we want to keep these to a minimum, obviously.
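As a rough illustration of what "a common format internally" could look like (not AAISP's actual implementation), here is a minimal sketch of the sort of fields a GSM 03.40-style message structure carries, including the concatenation details mentioned above; the class and field names are our own simplification.

    # Illustrative sketch only: a simplified internal representation of an
    # SMS, modelled loosely on GSM 03.40 TPDU fields. The real system's
    # structures and field names are not described in this post.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SmsTpdu:
        originating_address: str      # TP-OA (sender)
        destination_address: str      # TP-DA (recipient)
        protocol_identifier: int      # TP-PID
        data_coding_scheme: int       # TP-DCS (e.g. GSM 7-bit, UCS-2)
        user_data: bytes              # TP-UD (payload after any header)
        # Concatenation info from the user data header, if the text
        # spans multiple TPDUs:
        concat_reference: Optional[int] = None
        concat_total_parts: int = 1
        concat_part_number: int = 1

    def needs_reassembly(part: SmsTpdu) -> bool:
        # Multi-part texts must be collected and reassembled before
        # delivery by email/HTTP, as described above.
        return part.concat_total_parts > 1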
Update
21 Apr 2014 16:27:23

Work is going well on this, and we hope to switch Mobile Originated texting (i.e. texts from the SIP2SIM) over to the new system this week. If that goes to plan we can move some of the other ingress texting over to the new system one by one.

We'll be updating documentation at the same time.

The new system should be a lot more maintainable. We have a number of open tickets with the mobile carrier and other operators to try and improve the functionality of texting to/from us. These cover things like correct handling of multi-part texts, and correct character set coding.

The plan is ultimately to have full UTF-8 unicode support on all texts, but that could take a while. It seems telcos like to mess with things rather than giving us a clean GSM TPDU for texts. All good fun.

Update
22 Apr 2014 08:51:09
We have updated the web site documentation on this to the new system, but this is not fully in use yet. Hopefully this week we will have it all switched over. Right now we have removed some features from the documentation (such as delivery reports), but we plan to have these re-instated soon once we have the new system handling them sensibly.
Update
22 Apr 2014 09:50:44
MO texts from SIP2SIM are now using the new system - please let support know of any issues.
Update
22 Apr 2014 12:32:07
Texts from Three are now working to ALL of our 01, 02, and 03 numbers. These are delivered by email, http, or direct to SIP2SIM depending on the configuration on our control pages.
Update
23 Apr 2014 09:23:20
We have switched over one of our incoming SMS gateways to the new system now. So most messages coming from outside will use this. Any issues, please let support know ASAP.
Update
25 Apr 2014 10:29:50
We are currently running all SMS via the new platform - we expect there to be more work still to be done, but it should be operating as per the current documentation now. Please let support know of any issues.
Update
26 Apr 2014 13:27:37
We have switched the DNS to point SMS to the new servers running the new system. Any issues, please let support know.
Started 14 Apr 2014
Previously expected 01 May 2014

11 Apr 2014 15:50:28
Details
11 Apr 2014 15:53:42
There is a problem with the C server and it needs to be restarted again after the maintenance yesterday evening. We are going to do this at 17:00 as we need it to be done as soon as possible. Sorry for the short notice.
Started 11 Apr 2014 15:50:28

07 Apr 2014 13:45:09
Details
07 Apr 2014 13:52:31
We will be carrying out some maintenance on our 'C' SIP server outside office hours. It will cause disruption to calls, but is likely only to last a couple of minutes and will only affect calls on the A and C servers. It will not affect calls on our "voiceless" SIP platform or SIP2SIM. We will do this on Thursday evening at around 22:30. Please contact support if you have any questions.
Update
10 Apr 2014 23:19:59
Completed earlier this evening.
Started 07 Apr 2014 13:45:09
Previously expected 10 Apr 2014 22:45:00

25 Sep 2013
Details
18 Sep 2013 16:32:41
We have received notification that Three's network team will be carrying out maintenance on one of the nodes that routes our data SIM traffic between 00:00 and 06:00 on Weds 25th September. Some customers may notice a momentary drop in connections during this time as any SIMs using that route will disconnect when the link is shut down. Any affected SIMs will automatically take an alternate route when they try and reconnect. Unfortunately, we have no control over the timing of this as it is dependent on the retry strategy of your devices. During the window, the affected node will be offline therefore SIM connectivity should be considered at risk throughout.
Started 25 Sep 2013

Details
12 Feb 15:57:26
We have received the below PEW notification from one of our carriers that we take voice SIMs from: We have been advised by one of our layer 2 suppliers of emergency works to upgrade firmware on their routers to ensure ongoing stability. This will cause short drops in routing during the following periods: 00:00 to 06:00 Fri 13th Feb, and 00:00 to 04:00 Mon 16th Feb. Although traffic should automatically reroute within our core as part of our MPLS infrastructure, some partners may experience disruption to SIM connectivity due to the short heartbeats used on SIM sessions.

15 May 14:00:00
Details
15 May 15:29:53
We have had reports of some one-way audio calls today. We have not quite got to the bottom of it, and it seems to have gone away. This happened earlier in the week when one carrier had network issues, but we are not convinced that this is the same effect.
Started 15 May 12:00:00
Closed 15 May 14:00:00

14 May 15:42:00
Details
14 May 15:42:00

We are pleased to inform customers that we are changing the entry-level router that we supply. From next week we'll be shipping the ZyXEL VMG1312 by default instead of the Technicolor.

Since around 2012 we have been providing the Technicolor TG852, which was the first consumer-level router to support IPv6. With the advent of wires-only FTTC and the need for a more flexible and easy-to-use router we have been looking for a replacement. The ZyXEL VMG1312 is able to do ADSL, FTTC and PPPoE and is flexible enough to be used on most of our lines. We have been working with ZyXEL over the past few months to iron out bugs that we have found. There are still some bugs to be fixed and these are detailed on our Support site. The biggest bug is the lack of 1500 byte MTU when running in bridge mode; however, ZyXEL expect to have this fixed soon and in the meantime FTTC installations will be provided with the Openreach modem.

More information about the router: https://support.aa.net.uk/Category:ZyXEL_VMG1312

Started 14 May 13:12:00

13 May 12:16:45
Details
26 Mar 09:53:31

Over the past couple of weeks we have seen FTTC lines drop and reconnect with an increase in latency of around 15ms. This is seen on the monitoring graphs as a thicker blue line.

Upon first glance it looks as if interleaving has been enabled, but a line test shows that this is not the case.

We've been in contact with BT and it does look like BT are rolling out a new profile on to their Huawei DSLAMs in the local green cabinets. We had expected BT to roll out this new profile, but we didn't expect such an increase in latency.

The profile adds 'Physical Retransmission (ReTX) technology (G.INP / ITU G.998.4)' which helps with spikes of electromagnetic interference and can make lines more stable.

We would hope to have control over the enabling and disabling of this profile, but we don't. Line profiles on FTTC are managed by BT Openreach and are tricky for us, and even BT Wholesale, to get adjusted.

We're still discussing this with BT and will update this post with news as we have it.

Update
26 Mar 10:48:37
This has been escalated to the head of fibre deployment within BT Wholesale and we are expecting an update by the end of the day.
Update
26 Mar 11:12:08
Further information about G.INP:
  • http://www.ispreview.co.uk/index.php/2015/01/bt-enables-physical-retransmission-g-inp-fttc-broadband-lines.html
  • http://www.thinkbroadband.com/news/6789-impulse-noise-protection-rolling-out-on-openreach-vdsl2.html
  • http://forum.kitz.co.uk/index.php?topic=15099.0
...among others.
Update
27 Mar 16:46:22
BT have asked us for further information, which we have provided to them. We don't expect an update now until Monday.
Update
9 Apr 14:26:19
This is still ongoing with BT Wholesale and BT Openreach.
Update
16 Apr 15:58:53
This has been escalated to a very senior level within BT and we are expecting a proper update in the next few days.
Update
24 Apr 13:01:45
We have just received the below back from BT on this:

Following communications from a small number of BT Wholesale FTTC comms providers regarding Openreach's implementation of retransmission, and the identification of some of your customers who have seen increased latency on some lines for some applications since retransmission was applied, over the last 4 weeks I have been pushing Openreach to investigate, feed back and provide answers and options related to this issue. As a result, attached is a copy of a briefing from Openreach, sent to their CPs today, on how ReTX works and what may have caused this increased latency.

This info is being briefed to all BT Wholesale customers via our briefing on Saturday morning 25/4/15, but as you have contacted me direct I'm sending this direct as well as providing an opportunity to participate in a trial.

Openreach have also advised me this afternoon that they intend to run a trial next week (w/c 25/4/15) on a small set of lines where devices aren't retransmission compatible in the upstream, to see if changing certain parameters removes the latency and maintains the other benefits of retransmission. The exact date lines will be trialled has yet to be confirmed.

However, they have asked if I have any end users who would like to be included in this trial. To that end, if you have particular lines you'd like to participate in this trial please can you provide the DN for the service by 17:00 on Monday 28th April so I can get them included.

This is a trial of a solution and should improve latency performance, but there is a risk that there may be changes to the headline rate.

Update
5 May 22:28:25
Update to Trial here: https://aastatus.net/2127
Started 26 Mar 09:00:00
Closed 13 May 12:16:45

13 May 12:16:18
Details
12 May 12:50:17
One of our upstream VoIP providers is having some network issues (see below). This may be affecting some VoIP calls and 3G data SIMs.

Affected Systems: Connectivity to some aql core services
Affected Customers: Some customers across aql core services
Expected Resolution Time:

Dear Customers, We are aware of a connectivity issue in Leeds which engineers are investigating as their top priority. Voice customers may have seen calls drop. Customers colocating in Leeds may be experiencing connectivity issues. Resiliency in some services is reduced.

Update
13 May 12:15:45
This issue has now been resolved and below is information from AQL relating to the issue.

It's never a good thing to have to write an email like this. aql operate a resilient core network with an agile edge, allowing changes to be made to accommodate the addition of customers without making any fundamental (or risky) changes to the core network.

Last week, we had an outage related to instability within our core network, due to what we believe is a vendor / O/S bug within some core switch fabric and had started works with the vendor to address the issue with as little impact as possible.

aql have strong procedures and processes to minimise operational and network impact by way of planning core works or changes to core network to have little or no disruption, to be performed during silent hours and to be announced well in advance.

In this instance, regrettably, a senior member of staff did not follow these procedures and that is now under investigation. Please be assured we take such matters seriously and will take all necessary measures to ensure that there cannot be a repeat incident.

As with all incidents, aql prepares a full RFO "Reason for Outage", within 48 hours of any incident. If you require a copy of this, please make contact with your account manager.

Both personally and on behalf of aql, my sincere apologies - You should just be able to rely on us to do our part and you can then concentrate on your business.

Closed 13 May 12:16:18

12 May 13:43:30
Details
10 May 15:04:57
We are making a number of minor changes to the billing system, so as to improve the way the bills are presented, and importantly to improve the quality of the code itself. The billing system has had to evolve over more than 18 years and is in need of some tidying.

We are not changing prices, and so bills should not actually change in total.

We are changing the order that things are presented, and in some cases the number of line items shown, to try and make the bills clearer.

There are a lot of different scenarios to test, and we aim to catch them all, but if anyone thinks there is a billing error, or simply something that looks confusing, please do let us know right away.

Rest assured that we will work to correct any billing errors promptly if they do occur.

Update
12 May 13:42:36
Changes have gone well, and dry runs of next month's invoices show the amounts match up. We are pretty confident that there will not be any issues, but please do let us know of any problems with invoices.
Started 1 May
Expected close 2 Jun

12 May 13:41:47
Details
11 May 18:07:36
We have changed the way we apply minimum terms.

Instead of charging for all of the service to the end of the minimum term, we now charge an early termination fee for the period from the cease/migrate to the end of the term. This is a simple fee based on the tariff and line type. We are also scrapping the 30 day notice requirement.

Whilst the old system was simple, it did not fit OFCOM rules for the new migration system. We think the new system is equally simple, and saves customers money. As such the change has been introduced today.

More details here http://aa.net.uk/news-20150511-minterm.html
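As a hedged illustration only: if the early termination fee scales with the time remaining in the minimum term (an assumption on our part; the linked page has the actual amounts per tariff and line type), the calculation is essentially "period left times a per-month fee for that line type". The figures and function below are made up for the example.

    # Illustrative only: the fee-per-month figure here is made up;
    # the real amounts depend on tariff and line type (see the link above).
    from datetime import date

    def early_termination_fee(cease_date: date, term_end: date,
                              monthly_fee: float) -> float:
        """Fee for the period from cease/migrate to the end of the minimum term."""
        if cease_date >= term_end:
            return 0.0
        months_left = ((term_end.year - cease_date.year) * 12
                       + (term_end.month - cease_date.month))
        return months_left * monthly_fee

    # Example: ceasing 3 months before the end of a minimum term,
    # with a hypothetical 10.00/month early termination rate:
    print(early_termination_fee(date(2015, 6, 1), date(2015, 9, 1), 10.0))  # 30.0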

Started 11 May 18:00:00

23 Apr 2014 10:21:03
Details
01 Nov 2013 15:05:00
We have identified an issue that appears to be affecting some customers with FTTC modems. The issue is stupidly complex, and we are still trying to pin down the exact details. The symptoms appear to be that some packets are not passing correctly, some of the time.

Unfortunately, FireBrick FB105 tunnel packets are among the types of packet that refuse to pass correctly. This means customers relying on FB105 tunnels over FTTC are seeing issues.

The workaround is to remove the Ethernet lead to the modem and then reconnect it. This seems to fix the issue, at least until the next PPP restart. If you have remote access to a FireBrick, e.g. via WAN IP, and need to do this, you can change the Ethernet port settings to force it to re-negotiate, and this has the same effect - this only works if directly connected to the FTTC modem, as the fix does need the modem Ethernet to restart.

We are asking BT about this, and we are currently assuming this is a firmware issue on the BT FTTC modems.

We have confirmed that modems re-flashed with non-BT firmware do not have the same problem, though we don't usually recommend doing this as it is a BT modem and part of the service.

Update
04 Nov 2013 16:52:49
We have been working on getting more specific information regarding this, we hope to post an update tomorrow.
Update
05 Nov 2013 09:34:14
We have reproduced this problem by sending UDP packets using 'Scapy'. We are doing further testing today, and hope to write up a more detailed report about what we are seeing and what we have tested.
Update
05 Nov 2013 14:27:26
We have some quite good demonstrations of the problem now, and it looks like it will mess up most VPNs based on UDP. We can show how a whole range of UDP ports can be blacklisted by the modem somehow on the next PPP restart. It is crazy. We hope to post a little video of our testing shortly.
Update
05 Nov 2013 15:08:16
Here is an update/overview of the situation. (from http://revk.www.me.uk/2013/11/bt-huawei-fttc-modem-bug-breaking-vpns.html )

We have confirmed that the latest code in the BT FTTC modems appears to have a serious bug that is affecting almost anyone running any sort of VPN over FTTC.

Existing modems seem to be upgrading, presumably due to a roll out of new code in BT. An older modem that has not been online for a while is fine. A re-flashed modem with non-BT firmware is fine. A working modem on the line for a while suddenly stopped working, presumably upgraded.

The bug appears to be that the modem manages to "blacklist" some UDP packets after a PPP restart.

If we send a number of UDP packets, using various UDP ports, then cause PPP to drop and reconnect, we then find that around 254 combinations of UDP IP/ports are now blacklisted. I.e. they no longer get sent on the line. Other packets are fine.

If we send 500 different packets, around 254 of them will not work again after the PPP restart. It is not actually the first or last 254 packets, some are in the middle, but it seems to be 254 combinations. They work as much as you like before the PPP restart, and then never work after it.

We can send a batch of packets, wait 5 minutes, PPP restart, and still find that packets are now blacklisted. We have tried a wide range of ports, high and low, different src and dst ports, and so on - they are all affected.

The only way to "fix" it, is to disconnect the Ethernet port on the modem and reconnect. This does not even have to be long enough to drop PPP. Then it is fine until the next PPP restart. And yes, we have been running a load of scripts to systematically test this and reproduce the fault.
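For anyone wanting to reproduce a similar test, here is a rough sketch along the lines of what is described above (send a batch of UDP packets with varying ports, force a PPP restart, then send the same batch again and compare what arrives). This is our reconstruction of the method using the Scapy library mentioned earlier, not the exact scripts used; the target address is a placeholder.

    # Rough reconstruction of the test described above, using Scapy.
    # Send N UDP probes with distinct source/destination port pairs,
    # force a PPP restart manually, then re-send and see which port
    # combinations no longer pass (checked on a listener at the far end).
    from scapy.all import IP, UDP, Raw, send

    TARGET = "192.0.2.1"        # placeholder test host beyond the FTTC line
    BASE_SPORT = 40000
    BASE_DPORT = 50000
    COUNT = 500                 # the post notes ~254 of 500 end up blacklisted

    def send_probes(tag: bytes) -> None:
        for i in range(COUNT):
            pkt = (IP(dst=TARGET)
                   / UDP(sport=BASE_SPORT + i, dport=BASE_DPORT + i)
                   / Raw(load=tag + str(i).encode()))
            send(pkt, verbose=False)

    if __name__ == "__main__":
        send_probes(b"before-ppp-restart-")
        input("Now restart PPP on the line, then press Enter...")
        send_probes(b"after-ppp-restart-")
        # Compare which probe indexes arrive at the listener before vs after.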

The problem is that a lot of VPNs use UDP and use the same set of ports for all of the packets, so if that combination is blacklisted by the modem the VPN stops after a PPP restart. The only way to fix it is manual intervention.

The modem is meant to be an Ethernet bridge. It should not know anything about PPP restarting or UDP packets and ports. It makes no sense that it would do this. We have tested swapping working and broken modems back and forth. We have tested with a variety of different equipment doing PPPoE and IP behind the modem.

BT are working on this, but it is a serious concern that this is being rolled out.
Update
12 Nov 2013 10:20:18
Work on this is still ongoing... We have tested this on a standard BT retail FTTC 'Infinity' line, and the problem cannot be reproduced. We suspect this is because when the PPP re-establishes a different IP address is allocated each time, and whatever is doing the session tracking does not match the new connection.
Update
12 Nov 2013 11:08:17

Here is an update with a more specific explanation of the problem we are seeing:

On WBC FTTC, we can send a UDP packet inside the PPP and then drop the PPP a few seconds later. After the PPP re-establishes, UDP packets with the same source and destination IP and ports won't pass; they do not reach the LNS at the ISP.

Further to that, it's not just one src+dst IP and port tuple which is affected. We can send 254 UDP packets using different src+dst ports before we drop the PPP. After it comes back up, all 254 port combinations will fail. It is worth noting here that this cannot be reproduced on an FTTC service which allocates a dynamic IP that changes each time PPP is re-established.

If we send more than 254 packets, only 254 will be broken and the others will work. It's not always the first 254 or last 254, the broken ones move around between tests.

So it sounds like the modem (or, less likely, something in the cab or exchange) is creating state table entries for packets it is passing which tie them to a particular PPP session, and then failing to flush the table when the PPP goes down.

This is a little crazy in the first place. It's a modem. It shouldn't even be aware that it's passing PPPoE frames, let alone looking inside them to see that they are UDP.

This only happens when using an Openreach Huawei HG612 modem that we suspect has been recently remotely and automatically upgraded by Openreach in the past couple of months. Further - a HG612 modem with the 'unlocked' firmware does not have this problem. A HG612 modem that has probably not been automatically/remotely upgraded does not have this problem.

Side note: One theory is that the brokenness is actually happening in the street cab and not the modem. And that the new firmware in the modem which is triggering it has enabled 'link-state forwarding' on the modem's Ethernet interface.

Update
27 Nov 2013 10:09:42
This post has been a little quiet, but we are still working with BT/Openreach regarding this issue. We hope to have some more information to post in the next day or two.
Update
27 Nov 2013 10:10:13
We have also had reports from someone outside of AAISP reproducing this problem.
Update
27 Nov 2013 14:19:19
We have spent the morning with some nice chaps from Openreach and Huawei. We have demonstrated the problem and they were able to do traffic captures at various points on their side. Huawei HQ can now reproduce the problem and will investigate the problem further.
Update
28 Nov 2013 10:39:36
Adrian has posted about this on his blog: http://revk.www.me.uk/2013/11/bt-huawei-working-with-us.html
Update
13 Jan 2014 14:09:08
We are still chasing this with BT.
Update
03 Apr 2014 15:47:59
We have seen this affect SIP registrations (which use 5060 as the source and target)... Customers can contact us and we'll arrange a modem swap.
Update
23 Apr 2014 10:21:03
BT are in the process of testing an updated firmware for the modems with customers. Any customers affected by this can contact us and we can arrange a new modem to be sent out.
Update
7 May 22:56:52
Just a side note on this: we're seeing the same problem on the ZyXEL VMG1312 router which we are testing out and which uses the same chipset. Info and updates here: https://support.aa.net.uk/VMG1312-Trial
Resolution BT are testing a fix in the lab and will deploy in due course, but this could take months. However, if any customers are adversely affected by this bug, please let us know and we can arrange for BT to send a replacement ECI modem instead of the Huawei modem. Thank you all for your patience.

--Update--
BT do have a new firmware that they are rolling out to the modems. So far it does seem to have fixed the fault and we have not heard of any other issues as yet. If you do still have the issue, please reboot your modem; if the problem remains, please contact support@aa.net.uk and we will try and get the firmware rolled out to you.
Started 25 Oct 2013
Closed 23 Apr 2014 10:21:03

7 May 09:53:49
Details
09 Dec 2014 11:20:04
Some lines on the LOWER HOLLOWAY exchange are experiencing peak time packet loss. We have reported this to BT and they are investigating the issue.
Update
11 Dec 2014 10:46:42
BT have passed this to TSO for investigation. We are waiting for a further update.
Update
12 Dec 2014 14:23:56
BT's TSO are currently investigating the issue.
Update
16 Dec 2014 12:07:31
Other ISPs are seeing the same problem. The BT Capacity team are now looking in to this.
Update
17 Dec 2014 16:21:04
No update to report yet, we're still chasing BT...
Update
18 Dec 2014 11:09:46
The latest update from this morning is: "The BT capacity team have investigated and confirmed that the port is not being over utilized, tech services have been engaged and are currently investigating from their side."
Update
19 Dec 2014 15:47:47
BT are looking to move our affected circuits on to other ports.
Update
13 Jan 10:28:52
This is being escalated further with BT now; update to follow.
Update
19 Jan 12:04:34
This has been raised as a new reference as the old one was closed. Update due by tomorrow AM
Update
20 Jan 12:07:53
BT will be checking this further this evening so we should have more of an update by tomorrow morning
Update
22 Jan 09:44:47
An update is due by the end of the day
Update
22 Jan 16:02:24
This has been escalated further with BT; update probably tomorrow now.
Update
23 Jan 09:31:23
We are still waiting for a PEW to be relayed to us. BT will be chasing this for us later on in the day.
Update
26 Jan 09:46:03
BT are doing a 'test move' this evening where they will be moving a line onto another VLAN to see if that helps with the load; if that works then they will move the other affected lines onto this VLAN, probably Wednesday night.
Update
26 Jan 10:37:45
There will be an SVLAN migration to resolve this issue on Wednesday 28th Jan.
Update
30 Jan 09:33:57
Network rearrangement is happening on Sunday so we will check again on Monday
Update
2 Feb 14:23:12
Network rearrangement was done at 2AM this morning; we will check for packet loss and report back tomorrow.
Update
3 Feb 09:46:49
We are still seeing loss on a few lines - I am not at all happy that BT have not yet resolved this. A further escalation has been raised with BT and an update will follow shortly.
Update
4 Feb 10:39:03
Escalated further, with an update due at lunch time.
Update
11 Feb 14:14:58
We are getting extremely irritated with BT on this one; it should not take this long to add extra capacity in the affected area. Rocket on its way to them now ......
Update
24 Feb 12:59:54
Escalated further with BT; update due by the end of the day.
Update
2 Mar 09:57:59
We only have a few customers left showing peak time packet loss, and for now the fix will be to move them onto another MSAN; I am hoping this will be done in the next few days. We really have been pushing BT hard on this and other areas where we are seeing congestion. I am pleased that there are now only a handful of affected customers left.
Update
17 Mar 11:21:33
We have just put a boot up BT on this; update to follow.
Update
2 Apr 13:16:10
BT have still not fixed the fault, so we have moved some of the affected circuits over to TalkTalk and I am pleased to say that we are not seeing loss on those lines. This is 100% a BT issue and I am struggling to understand why they have still not tracked the fault down.
Closed 7 May 09:53:49
Previously expected 1 Feb 09:34:04 (Last Estimated Resolution Time from AAISP)

7 May 09:52:18
Details
11 Mar 11:39:17
We are seeing some evening time congestion on all BT 21CN lines that connect through BRASs 21CN-BRAS-RED1-MR-DH up to 21CN-BRAS-RED13-MR-DH. I suspect one of the BT nodes is hitting limits some evenings, as we don't see the higher latency every night. This has been reported to BT and we will update this post as soon as they respond.
Update
11 Mar 11:44:06
Here is an example graph
Update
12 Mar 12:00:45
This has been escalated further to the BT network guys and we can expect an update within the next few hours.
Update
17 Mar 15:41:18
Work was done on this overnight so I will check again tomorrow morning and post another update.
Update
18 Mar 11:38:25
The changes BT made overnight have made a significant difference to the latency. We are still seeing it slightly higher than we would like, so we will go back to them again.
Update
19 Mar 14:54:44
Unfortunately the latency has increased again so whatever BT did two nights ago has not really helped. We are chasing again now.
Update
23 Mar 14:07:53
BT have still not pinpointed the issue so it has been escalated further.
Update
27 Mar 13:03:38
Latency is hardly noticeable now, but we are still chasing BT on sorting the actual issue; the next update will be Monday now.
Update
30 Mar 10:04:14
BT have advised that they are aware of the congestion issue at Manchester, and the solution they have in place is to install some additional edge routers. They are already escalating this to bring the date in early; currently the date is May. Obviously May is just not acceptable and we are doing all we can to get BT to bring this date forward.
Update
2 Apr 12:28:39
We have requested a further escalation within BT; the timescales they have given for a fix are just not acceptable.
Update
13 Apr 15:12:23
The last update from BT was: 'This latency issue has been escalated to a high level. BT TSO are currently working on a resolution and are hoping to move into the testing phase soon. We will keep you updated as we get more information.' I am chasing for another update now.
Update
16 Apr 16:01:15
We are still chasing BT up on bringing the 'fix' forward. Hopefully we will have another response by the morning.
Update
21 Apr 13:25:21
The latest update from BT: We have identified a solution to the capacity issue identified and are looking to put in a solution this Friday night...
Update
24 Apr 15:25:51
BT have added more capacity to their network and last night the latency looked fine. We will review this again on Monday.
Started 11 Mar 01:35:37
Closed 7 May 09:52:18

6 May 14:03:19
Details
5 May 16:29:43
Our office internet is currently offline. We're arranging our backup connection at the moment.
Update
5 May 16:55:40
We're making progress on our backup link as well as investigating the cause of the main fault.
Update
5 May 18:57:51
We do apologise about our connectivity problems this afternoon. We have been running on our backup FTTC line, and we're investigating why the direct fibre is down.
Update
6 May 10:34:33
Our main internet connection is still down this morning. We have a number of BT engineers working on this at the moment. This does mean that we are running on our backup FTTC links and most things are working OK! Some telephone calls are a little temperamental, so we'd appreciate customers using IRC or email where possible; see http://aa.net.uk/kb-irc.html for details of connecting to IRC. We hope to be back to normal later this morning.
Update
6 May 11:57:53
BT have identified 2 faults with our fibre. They have fixed one of them (a kink in a fibre patch lead in Bracknell exchange) and are investigating the second (low light level on one leg).
Resolution We're all back to normal now!
Started 5 May 16:25:00
Closed 6 May 14:03:19

29 Apr 15:23:27
Details
29 Apr 14:43:36
A third of our BT lines blipped - this looks to be an issue with routing on one of our LNSs in to BT.
Update
29 Apr 14:50:18
Many lines are failing to reconnect properly, we are investigating this.
Update
29 Apr 14:57:42
Lines are connecting successfully now
Update
29 Apr 15:23:27
The bulk of lines are back online. There are a small number of lines that are still failing to reconnect. These are being looked in to.
Update
29 Apr 15:36:54
The remaining lines are reconnecting successfully now.
Resolution I wanted to try and explain more about what happened today, but it is kind of tricky without saying "Something crazy in the routing to/from BT".

We did, in fact, make a change - something was not working with our test LNS and a customer needed to connect. We spotted that, for some unknown reason, the routing used a static route internally instead of one announced by BGP, for just one of the four LNSs, and that on top of that the static route was wrong, hence the test LNS not working via that LNS. It made no sense, and as all three other LNSs were configured sensibly we changed the "A" LNS to be the same - after all, this was clearly a config that just worked and was no problem, or so it seemed.

Things went flappy, but we could not see why. It looks like BGP in to BT was flapping, so people connected and disconnected rather a lot. We reverted the config and things seemed to be fixed for most people, but not quite all. This made no sense. Some people were connecting and going on line, and then falling off line.

The "fix" to that was to change the endpoint LNS IP address used by BT to an alias on the same LNS. We have done this in the past where BT have had a faulty link in a LAG. We wonder if this issue was "lurking" and the problem we created showed it up. This shows that there was definitely an issue in BT somehow as the fix should not have made any difference otherwise.

What is extra special is that this looks like it has happened before - the logs suggest the bodge of a static route was set up in 2008, and I have this vague recollection of a mystery flappiness like this which was never solved.

Obviously I do apologise for this, and having corrected the out-of-date static route this should not need touching again, but it is damn strange.

Started 29 Apr 14:38:00
Closed 29 Apr 15:23:27
Previously expected 29 Apr 14:50:00

27 Apr 14:57:49
Details
27 Apr 14:56:58
Previously Openreach had advised that they intend to run a trial starting today on a small set of lines where devices aren't retransmission compatible in the upstream, to see if changing certain parameters removes the latency and maintains the other benefits of retransmission. They have now advised us that the trial start date has been put back by two weeks (no idea why).

So if you have an FTTC line that is affected by this then please drop an email to support and we can include it in the list of affected lines that we will get included in the trial.

Update
11 May 11:07:12
Openreach have advised they will start loading the new DLM profiles to lines on Tuesday morning as part of regular DLM runs. Customers that are on the trial will notice a loss of sync when the new profiles are updated.
Started 27 Apr 14:51:39

25 Apr 18:46:00
Details
25 Apr 18:48:19
There was an unexpected blip in routing - we are looking in to it.
Started 25 Apr 18:44:00
Closed 25 Apr 18:46:00
Previously expected 25 Apr 22:46:00

20 Apr 09:54:30
Details
20 Apr 09:54:30
Customers will have received an email from us. Apologies for not PGP signing it. It asks you to go to a secure link on our control pages and confirm (one click) that you consent to receive notices via email.

Yes, I know it is crazy, and it is already part of our terms, and you already know we email notices, and that this email is a notice we have emailed you... Sorry but OFCOM insist we get *explicit* consent to send some notices we send.

We'd appreciate it if you just click the link and then the confirm button.

We'll email you again if you don't, sorry. If you are not happy about this, please do complain to OFCOM. Thank you.

Update
21 Apr 18:23:01
We have resent the email to all of those that have not followed the link and confirmed. This time, PGP signed. Sorry for any concern the previous email caused.
Update
21 Apr 18:29:59
I'd also like to thank the *thousands* of people that have confirmed their consent so far.
Started 19 Apr
Expected close 1 Jun

17 Apr 15:54:16
Details
15 Apr 13:16:15
Some customers on the Bradwell Abbey exchange are currently experiencing an outage. We have received reports from FTTP customers, however this may also affect customers using other services. BT have advised that they are currently awaiting a delivery for a new card at this exchange. We will chase BT for updates and provide them as we receive them.
Update
15 Apr 15:47:41
I have requested a further update from BT.
Update
16 Apr 08:07:15
Openreach AOC and PTO are investigating further at this time. We will reach out for an update later today.
Update
16 Apr 10:32:55
BT have advised that a cable down is the root cause at this time.
Update
16 Apr 15:51:50
PTO are still on site. I have asked for an ECD; however, Openreach are not supplying that information as it is fibre work.
Update
17 Apr 10:19:33
Openreach have stated that they are hoping to complete the fibre work today and that resource is being tasked out. Openreach have stated this is only an estimate and not set in stone.
Update
17 Apr 14:49:46
Some customers are reporting a restored service. BT advise that teams are still on site to resolve this P1 issue.
Update
17 Apr 15:55:25
The cable down issue affecting customers using the Bradwell Abbey exchange has now been resolved.
Started 15 Apr 12:55:00 by AAISP Staff
Closed 17 Apr 15:54:16
Cause BT

16 Apr 16:00:24
Details
27 Mar 14:03:52
We are seeing packet loss on all lines connected through 21cn-BRAS-RED8-SL. The loss is present all through the day and night, and started at 10:08 on the 25th. This has been reported to BT.
Update
27 Mar 14:07:22
Here is an example graph:
Update
30 Mar 14:37:04
BT claimed to have fixed this, but our monitoring is still seeing the loss; BT have been chased further.
Broadband Users Affected 0.01%
Closed 16 Apr 16:00:24

16 Apr 15:59:33
Details
2 Feb 10:10:46
We are seeing low level packet loss on BT lines connected to the Wapping exchange - approx 6pm to 11pm every night. Reported to BT...
Update
2 Feb 10:13:57
Here is an example graph:
Update
3 Feb 15:55:40
This has been escalated further with BT.
Update
4 Feb 10:27:37
Escalated further with BT; update due after lunch.
Update
11 Feb 14:18:00
Still not fixed; we are arming yet another rocket to fire at BT.
Update
24 Feb 12:58:51
Escalated further with BT; update due by the end of the day.
Update
2 Mar 10:00:11
Again the last few users seeing packet loss will be moved onto another MSAN in the next few days.
Update
12 Mar 12:02:57
Update expected in the next few hours.
Update
17 Mar 11:19:48
A further escalation has been raised on this; update by the end of the day.
Update
30 Mar 15:35:32
This has been escalated to the next level
Broadband Users Affected 0.09%
Started 2 Feb 10:09:12 by AAISP automated checking
Closed 16 Apr 15:59:33

13 Apr 15:01:38
Details
13 Apr 14:51:55
There was an issue with two of our routers - a few lines dropped, and are reconnecting. Routing was affected for a minute or two. We're investigating.
Resolution Service has recovered as expected. We'll see if we can find the underlying cause. Sorry for any inconvenience.
Started 13 Apr 14:46:46
Closed 13 Apr 15:01:38
Previously expected 13 Apr 14:50:00

7 May 08:33:30
Details
2 Apr 15:48:08
We expect to do some router upgrades, including normal rolling LNS upgrades over the next week as a new release of the FireBrick is expected to be released shortly. This should have little or no disruption, as usual.
Update
9 Apr 12:52:05
This was a bit delayed and should start tonight, and be ongoing in to the weekend.
Update
25 Apr 09:08:02
Further updates this weekend (25/26)
Started 3 Apr
Closed 7 May 08:33:30
Previously expected 1 May

2 Apr 16:02:22
Details
17 Mar 12:38:27
We are seeing higher than normal evening time latency on the Wrexham exchange. It is not every night, but it does suggest BT are running another congested link. This has been reported to them and we will update this as and when they get back to us.
Update
17 Mar 12:41:51
Here is an example graph:
Update
20 Mar 14:36:18
It has looked better the last two evenings, but the BT links were probably just less busy, so it's still being investigated.
Broadband Users Affected 0.01%
Started 15 Mar 12:36:07 by AAISP Staff
Closed 2 Apr 16:02:22

2 Apr 11:57:32
Details
1 Apr 10:00:06
Some customers connected through Gloucestershire are affected by an ongoing TalkTalk major service outage. Details below:

Summary

Network monitoring initially identified total loss of service to all customers connected to 2 exchanges in the Gloucester area. Our NOC engineers re-routed impacted traffic whilst Virgin Media engineers carried out preliminary investigations. Virgin Media restoration work subsequently resulted in several major circuits in the Gloucester area failing.

This has resulted in a variety of issues for multiple customers connected to multiple exchanges. Our NOC engineers have completed re-routing procedures to restore service for some customers, with other customers continuing to experience total loss of service due to capacity limitations.

Impact:
  • Tigworth, Witcombe and Painswick exchanges
  • Hardwicke and Barnwood exchanges – experiencing congestion related issues
  • Cheltenham and Churchdown exchanges – experiencing congestion related issues
  • Stroud, Stonehouse, Whitecroft, Blakeney, Lydney, Bishops Cleeve, Winchcombe, Tewkesbury and Bredon exchanges – experiencing congestion related issues

Update
1 Apr 10:31:30
TT have advised that splicing of the affected fibre is still ongoing. There are no further progress updates at this time. Further updates will be sent out shortly.
Update
2 Apr 11:57:26
Root cause analysis identified a major Virgin Media fibre break, due to third party contractor damage, as the cause of this incident. Service was fully restored when Virgin Media fibre engineers spliced new fibre. Following this we received confirmation that service had returned to BAU. TalkTalk customers would have been automatically rerouted and would have experienced only a momentary loss of service. An observation period has been carried out verifying network stability, and as no further issues have been reported this incident will be closed, with further investigations into the cause being tracked via the Problem Management process.
Closed 2 Apr 11:57:32

1 Apr 09:01:39
Details
1 Apr 09:01:39

We have extended our support hours, which are now 8am to 6pm, Mon-Fri, except (English) public holidays.

Previously we worked 9am to 5pm, and sales/accounts still do. However the support staff can usually address simple/urgent queries in those areas if necessary.

Occasionally we do have people ask why we only work office hours, and it is worth trying to explain this. Many ISPs do, indeed, have 24 hour telephone support, for example.

For most of our services, faults come in two flavours. Either there is some big issue (a major outage), in which case we have staff getting involved in fixing things whatever time it is, or an individual line fault for DSL. It is pretty rare to have individual faults for VoIP, SIMs, etc, but you can, of course, get line faults for DSL.

When it comes to individual DSL faults, there are a load of things people can do at home/office to eliminate equipment and test for themselves, and we offer various on-line tests via our control pages. This can help resolve things. But the issues that don't just go away, and would require support staff to do something, are almost always something that needs a BT engineer to go out.

With very few exceptions, BT engineers are not going to be going out any quicker if we book them next working day at 9am. So having support staff take calls in the middle of the night would not usually be any help. We also have no intention of farming support out to call centres following scripts.

However, we have decided to extend the hours a bit. The reasons being :-

  • BT engineers work 8am to 6pm normally, and so we can help address any issues that come up with an engineer visit, and talk to the engineer or our customer about it at the time. This has already been seen with some 8am visits by engineers who are confused by the notes and need us to explain.
  • Starting at 8am gives customers a chance to resolve issues that can be resolved by talking to support before the usual working day for most people. If it is a line fault, that is not much help, but if it is a matter of swapping a router or rebooting something, we can offer the necessary advice before you have an office full of people that cannot work.
  • Starting at 8am and finishing at 6pm allows a lot of people that work during the day to contact us from home where they have an issue with their home broadband. We know some customers appreciate that.
  • We have increased the number of support staff, allowing some staggered working hours so that we can offer this.
But please do bear in mind, we do have irc, with a simple web front end, which can offer various help and advice from staff and other customers at all sorts of times. It is informal support from staff outside normal hours, but is usually available. We are thinking of perhaps extending this to be more formal evening irc support at some point, with a rota of some sort.

Obviously we're interested in feedback on how the new support hours work for customers.

Started 1 Apr 09:00:00

31 Mar 22:12:13
Details
31 Mar 22:05:19
The Control Pages are currently offline and affecting some other services, eg SIP2SIM registrations. This is being looked in to at the moment.
Update
31 Mar 22:12:32
Service is back to normal.
Started 31 Mar 21:50:00
Closed 31 Mar 22:12:13

27 Mar 09:00:00
Details
25 Mar 21:48:13
Since the 24th March we have been seeing congestion on TalkTalk lines on the Shepherds Bush exchange. This has been reported to TalkTalk. Example graph:
Update
26 Mar 10:51:48
TalkTalk say they have fixed this. We'll be checking overnight to be sure.
Update
26 Mar 22:27:33
Lines are looking good.
Resolution

We had this feedback from TalkTalk regarding this congestion issue:

Shepherds Bush has three GigE backhauls to two different BNGs - there was a software process failure and restart on one of these devices on Tuesday morning, which had two of the three backhauls homed to it. As a result all customers redialled to the one 'working' BNG in the exchange - normally when this happens we will calculate whether or not the backhaul can handle that number of customers and if not manually intervene; in this case however a secondary knock-on issue meant that our DHCP based customers (FTTC subs) were sent through the same backhaul and the calculation was inaccurate.

If the PPP session was restarted they would have reconnected on their normal BNG and everything should be OK - we've just made this change manually moving subscribers over - still have a couple of lines on the backup BNG so will monitor if there are any issues and take any necessary actions to resolve.

Started 24 Mar 17:00:00
Closed 27 Mar 09:00:00

26 Mar 14:16:54
Details
4 Feb 10:55:10
One of our carriers (AQL) will be doing some maintenance on their SMS platform on Wednesday 11th February between 10:00 - 11:00. This is to load new firmware on some routers, but no loss of service is expected. This is advisory only.
Started 4 Feb 10:51:46
Previously expected 12 Feb 11:00:00

25 Mar 15:10:21
[Email and Web Hosting] - SSL Certificates Updated - Info
Details
25 Mar 15:08:57
We're updating SSL certificates for our email servers this afternoon, including webmail. The old serial number is 0FA016. The new serial number is 106E03. Users who don't have the CAcert root certificate installed may see errors. Details on http://aa.net.uk/cacert.html
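If you want to check which certificate a server is presenting, one quick way is to fetch it and print its serial number to compare against the old (0FA016) and new (106E03) serials above. The sketch below does this in Python; the hostname is a placeholder for whichever mail/webmail server you use, and it assumes the third-party 'cryptography' package is installed.

    # Sketch: fetch a server's TLS certificate and print its serial number,
    # to compare against the old (0FA016) and new (106E03) serials above.
    # Hostname/port below are placeholders; use your actual mail/webmail server.
    import ssl
    from cryptography import x509

    def cert_serial_hex(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        return format(cert.serial_number, "X")

    if __name__ == "__main__":
        print(cert_serial_hex("webmail.example.invalid", 443))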

23 Mar 10:38:00
Details
23 Mar 09:51:01
We are investigating a problem with some incoming calls; our engineers are busy investigating this now. Apologies for the inconvenience.
Update
23 Mar 10:01:55
We have raised this with the carrier in question, they are investigating.
Update
23 Mar 10:04:39
The carrier have confirmed that they have a fault.
Update
23 Mar 10:09:20
Customers can call us on:
  • Sales: 05555 400 000
  • Support 05555 400 999
for the time being.
Update
23 Mar 10:12:32
Carrier say "Our engineers are working on this as a top priority"
Update
23 Mar 10:33:16
Carrier says: "initial investigations point to memory issues and we are investigating that further. As a temporary solution we are attempting to update our internal routing to mitigate the impacts."
Update
23 Mar 10:37:35
Incoming calls seem to be working ok now.
Resolution Calls working again. The fault was caused by an upstream carrier and affected many VoIP services across the UK. The carrier has provided a 'Reason for Outage' document: http://www.aql.com/downloads/RFO_20150323_aql.pdf
Closed 23 Mar 10:38:00

17 Mar 11:18:55
Details
20 Jan 12:53:37
We are seeing low-level packet loss on some BT circuits connected to the EUSTON exchange. This has been raised with BT, and we will post an update here as soon as we have one.
Update
20 Jan 12:57:32
Here is an example graph:
Update
22 Jan 09:02:48
We are due an update on this one later this PM
Update
23 Jan 09:36:21
BT are chasing this and we are due an update at around 1:30PM.
Update
26 Jan 09:41:39
Work was done overnight on the BT side to move load onto other parts of the network; we will check this again this evening and report back.
Update
27 Jan 10:33:05
We are still seeing lines with evening packet loss, but BT don't appear to understand this. After we spent the morning arguing with them, they agreed to investigate further. Update to follow.
Update
28 Jan 09:35:28
Update from BT due this PM
Update
29 Jan 10:33:57
BT are again working on this, but no further updates will be given until tomorrow morning.
Update
3 Feb 16:19:06
This one has also been escalated further with BT
Update
4 Feb 10:18:11
BT have identified a fault within their network and we have been advised that an update will be given after lunch today
Update
11 Feb 14:16:56
Yet another rocket on its way to BT.
Update
24 Feb 12:59:20
Escalated further with BT; an update is due by the end of the day.
Update
2 Mar 09:59:19
Still waiting for BT to raise an emergency PEW (planned engineering work); the PEW will sort the last few lines where we are seeing peak-time packet loss.
Update
12 Mar 12:03:57
I need to check this tonight as BT think it is fixed; I will post an update tomorrow.
Broadband Users Affected 0.07%
Started 10 Jan 12:51:26 by AAISP automated checking
Closed 17 Mar 11:18:55
Previously expected 21 Jan 16:51:26

17 Mar 10:08:05
Details
17 Mar 10:08:05

For the past year or so we have been offering two 'CSS styles' for the Control Pages. Today we are changing the default style to the new one. This means that most customers will now be using the new style.

You can see the two styles on this page: http://wiki.aa.org.uk/CSS

Feedback is welcome; pop us an email to webmaster@aa.net.uk

Started 17 Mar 09:55:00

13 Mar 22:57:53
Details
13 Mar 22:44:00
TalkTalk lines lost connection at about 10:30pm, and are reconnecting at the moment.
Update
13 Mar 22:47:49
About 80% of lines have now reconnected.
Update
13 Mar 22:58:06
Most lines are back.
Resolution Confirmed as a fault within the TalkTalk network that affected us and other ISPs.
Started 13 Mar 22:30:00
Closed 13 Mar 22:57:53

13 Mar 13:12:51
Details
13 Mar 12:44:14
Sorry for the short notice: the Control Pages will be offline for a few minutes this afternoon whilst we carry out some work on the hardware (no, we are not installing black boxes!). Services will be unaffected, but customers won't be able to access the Control Pages during this time.
Update
13 Mar 13:06:06
This work has started, we expect the Control Pages to be back in 10-15 minutes.
Update
13 Mar 13:13:16
Work has completed successfully
Started 13 Mar 13:00:00
Closed 13 Mar 13:12:51

12 Mar 18:30:05
Details
12 Mar 18:24:06
We allow users to provide address/location data to pass to emergency services in the event of a 999 call.

This goes via BT, and BT are being a pain at present. As we have explained, the data is simply a matter of what customers provide. VoIP has no inherent way to provide an accurate geographic location of callers. But BT are unhappy with some of the data and seem to have blocked us sending updates this week.

They even suggested that they have to send OFCOM reports of incomplete and incorrect data, as some sort of threat. We know a lot of people have multiple locations from which they make calls. We do this purely as a goodwill gesture and in the spirit of GC4, because we don't have location data in the first place. So if BT do not stop buggering about soon we'll be reporting BT to OFCOM instead.

Anyway, the upshot is that a few customers that have provided location data over the last few days are not yet updated to BT and hence the emergency services. We are working to resolve this and hope to have it working again soon.

The config pages for your VoIP setup show if there is an issue, and you get an email once it is all confirmed. In the meantime, we do apologise for any concern this may cause.

None of our services are provided to be used in any "safety of life" circumstances, as per our standard terms.

Update
16 Mar 16:49:34
We are hoping to have this sorted, perhaps today.

It is likely that anyone with existing data registered may get a new email at some point confirming the data as we are planning to re-send everything from scratch just to be sure.

We would ask that customers review what they have entered and ensure it is as accurate as possible to assist emergency services in the event of a 999 call.

Update
18 Mar 17:40:37
Data is updating again to BT. We would recommend customers ensure accurate data on the control pages for 999 calls. We will review this over the next few days, and customers may still get update emails in due course.
Started 8 Mar
Previously expected 16 Mar

12 Mar 12:01:00
Details
12 Mar 09:50:23
TalkTalk have a fibre outage affecting EoFTTC lines in the East Midlands and East Anglia area. Lines will be in sync, but will not have any further connectivity. TalkTalk are aware and have engineers investigating. They have their own status post here: http://solutions.opal.co.uk/network-status-report.php?reportid=6291
Update
12 Mar 10:18:51
No further updates as yet.
Update
12 Mar 10:20:37
Update from TT: Our NOC and Network Support Engineers have confirmed that corrective work to restore services to all impacted exchanges remains on-going.
Update
12 Mar 11:44:30
Update from TalkTalk: "Our NOC Engineers have confirmed that all network alarms have cleared and normal working service is now fully restored to all impacted exchanges and customers. There are continued reports of some B2B circuits continuing to experience issues and our IP Engineers are currently investigating these reports. Incident Management will continue to liaise with our NOC Engineers and further updates will be provided upon receipt."
Update
12 Mar 14:20:48
Service has been restored.
Started 12 Mar 00:10:00
Closed 12 Mar 12:01:00

6 Mar 13:00:00
Details
5 Mar 10:54:01
We are seeing quite a few lines on the Durham and NEW BRANCEPETH exchanges with a connection problem. Customers may have no internet access, with their router constantly logging in and out.

We have reported this fault to BT and they are investigating. It looks like the BRAS has a fault.

Update
5 Mar 16:25:28
We have been chasing this with BT throughout the day; their tech team are still investigating.
Update
6 Mar 09:16:19
We still have a few lines off and we are on the phone to BT now chasing this. Update to follow.
Update
6 Mar 13:48:38
BT appear to have a broken LAG (link aggregation group), and as a workaround we have had to change one of our endpoint IP addresses; the affected customers are back online. This is just a workaround, and hopefully BT will shortly fix their end.
Started 5 Mar 02:00:00
Closed 6 Mar 13:00:00

7 Mar 16:33:26
Details
7 Mar 16:33:26
We have been updating the systems that handle advising BT of location data for 999 calls, and this resulted in a number of emails being sent yesterday confirming previously entered details.

Sorry for any confusion caused.

Started 6 Mar
Previously expected 7 Mar

6 Mar 18:14:21
Details
6 Mar 16:05:00
We have identified that Ethernet customers are not seeing archived graphs for the last few days; the cause is part of the recent router upgrade code. We plan to carry out a further upgrade this evening, which should cause little or no disruption and should rectify this issue.
Update
6 Mar 16:53:29
There may be a blip on some TalkTalk lines during this upgrade.
Resolution Work completed for now - seems to have gone as planned. There was a small TalkTalk line blip as expected.
Started 6 Mar 17:00:00
Closed 6 Mar 18:14:21
Previously expected 6 Mar 19:00:00