
30 Jan 08:50:13
Posted: 9 Jan 13:42:48
One of our BT links dropped its connection - this caused some DSL circuits to drop - they should reconnect soon if not already
9 Jan 13:47:13
The fibre is back up, and most DSL circuits have now reconnected.
9 Jan 13:50:58
The link dropped again (whilst we were talking to BT!). We'll take the link out of service for the time being. Affected lines will connect on our other links.
9 Jan 13:58:32
Again, lines have mostly all reconnected. We have taken the affected fibre out of service whilst we investigate the cause with BT.
9 Jan 14:33:09
The network is stable and BT are investigating the cause. The fibre is still out of service whilst we investigate the cause with BT.
11 Jan 12:55:17
BT were unable to find a fault when they investigated this on Tuesday. However, the circuit is dropping out again today. This has caused PPP reconnects for some customers. We are back on the case with BT.
11 Jan 13:10:35

For the record and to help diagnostics of broadband faults, this incident has caused PPP drops at the following times:

2018-01-09 13:39
2018-01-09 13:48
2018-01-11 11:55
2018-01-11 12:33
2018-01-11 12:39
11 Jan 14:22:04
BT are investigating.
Started 9 Jan 13:38:00

09 Mar 2017 20:00:00
Posted: 08 Mar 2017 12:29:14

We continue to work with TalkTalk to get to the bottom of the slow throughput issue as described on https://aastatus.net/2358

We will be performing some routing changes and tests this afternoon and this evening. We are not expecting this to cause any drops for customers, but this evening there will be times when throughput for 'single thread' downloads will be slow. Sorry for the short notice; please bear with us, as this is proving to be a tricky fault to track down.

08 Mar 2017 22:39:39
Sorry, due to TalkTalk needing extra time to prepare for their changes this work has been moved to Thursday 9th evening.
Started 09 Mar 2017 20:00:00
Update was expected 09 Mar 2017 23:00:00

Posted: Today 15:44:51

BT need to carry out essential work on the equipment that our hostlinks are connected to at their end. This does mean an outage that will affect all BT-provided ADSL, FTTC and FTTP services. (TalkTalk-connected circuits are not affected.)

Time: 2nd March 2018, between 02:00 – 06:00

More information:

The plan is that the work is carried out between 02:00 – 04:00, using 04:00 – 06:00 as a rollback window if required.
We expect the downtime to be approximately 30 minutes or less, between 02:00 and 03:00.

Expected close 2 Mar 06:00:00

Posted: 15 Dec 2017 11:52:57

We have lots of work happening in January regarding our interconnects to both BT and TalkTalk:

  • New physical fibres to BT: this increases our capacity.
  • New TalkTalk network: this increases TalkTalk's capacity.

We already have customers trialling the new TalkTalk network and we'll be asking for customers to trial the new BT interconnects early January.

Neither of these changes will require our customers to do anything, and we'll be moving circuits to the new networks overnight on separate occasions. We don't have the dates of these works yet, but this status post will be updated in early January with more details.

8 Jan 13:53:10
BT testing is underway: https://aastatus.net/2488
11 Jan 13:34:23
Moving customers to the new BT pipes will start from Monday 15th January: https://aastatus.net/2490
11 Jan 13:35:03
We're still waiting for TalkTalk to finish some work on their side before we can start moving TalkTalk customers over.
26 Jan 14:01:37
BT circuits are now on the new interconnects. Due to TalkTalk needing to wait for new software releases from their vendor (Juniper), moving our TalkTalk circuits over to their new network won't happen until mid-March 2018.
Update expected 15 Mar 13:00:00
Previously expected 21 Jan 22:00:00

26 Jan 14:02:59
Posted: 09 Nov 2017 15:47:52
"TalkTalk have built a new Juniper LTS solution in order to:
  • Increase the total subscribers per TalkTalk LTS
  • Increase of the throughput capacity per chassis
  • Assist with load balancing
  • Increase network resilience"

In other words, they have been working on a new network for some time, which should see the end of the various minor congestion issues that we've been seeing and will mean they have ample capacity for the future.

Before we migrate all our customers over we are asking for some customers to test this new network. Testing simply means us making some changes and the customer changing their PPP username slightly. Everything else works the same.

If you would like to test, please send an email with your login to: trial@aa.net.uk and we'll get back to you with further instructions.

We are one of the first ISPs that will be using the new network and we expect to move all our customers over before the end of November but we have not set a date yet.

22 Nov 2017 16:04:33
Customers on the TalkTalk trial will be moved to a new LTS this afternoon. The LTS is the router at TalkTalk's end of our links to them. This involves a PPP re-connect, which will be carried out by TalkTalk. Your router should log back in within a few seconds. This new LTS has newer software which we're testing to see if some of the bugs we've reported are fixed. We'll post more about this shortly.
12 Dec 2017 09:08:54
Update: The new network on the whole is performing well. We have been working with TalkTalk engineers and Juniper developers to help fix some problems. The main problem is that some lines fail to establish PPP; we have implemented a workaround on our LNSs, and Juniper are working towards a permanent fix for this.
We don't yet have a date when we'll move all customers across to the new network. We do hope this will be before Christmas, but due to the holidays it may be early January.
26 Jan 14:02:59
Due to TalkTalk needing to wait for new software releases from their vendor (Juniper), moving our TalkTalk circuits over to their new network won't happen until mid-March 2018. We are still happy to have customers on the trial.
Started 09 Nov 2017 15:30:00
Update expected 15 Mar 13:00:00

25 Jan 11:00:03
Posted: 25 Jan 11:32:23
Between 10:45:43 and 10:57:33 we saw several blips on half of our new BT host links. They do seem stable at the moment; however, we are still waiting for BT to comment on the cause.
Resolution BT have confirmed that this was a problem on their side. We are awaiting further details about the incident.
Started 25 Jan 10:45:43
Closed 25 Jan 11:00:03

15 Jan 13:02:42
Posted: 8 Jan 13:49:47

As mentioned in https://aastatus.net/2482 we are installing new fibres into BT to increase our capacity. We are now ready for customers on BT backhaul ADSL and VDSL circuits to test this.

Customers can prefix the username in their router with "test-", which will then connect through the new fibres. (eg if your username is example@a.1, change it to test-example@a.1) There should be no other differences in the service. We'd appreciate any feedback and observations being sent to trial@aa.net.uk
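As a trivial sketch of the change (a hypothetical helper; in practice you simply edit the username in your router's PPP settings), the transformation is:

```python
def trial_username(username):
    """Return the trial form of a PPP login by adding the "test-" prefix,
    eg "example@a.1" becomes "test-example@a.1"."""
    if username.startswith("test-"):
        # Already in trial form; nothing to change.
        return username
    return "test-" + username
```

Removing the prefix again returns the line to the existing fibres.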

All being well, we hope to move the rest of the customer circuits in stages next week at which point you can remove the "test-" from your username.

8 Jan 13:54:42
If customers go over their usage tariff during the testing we'll apply free top-ups - email trial@aa.net.uk if this happens or if you think it is likely.
11 Jan 13:43:35

From Monday 15th January we will be moving circuits over to the new BT pipes. This process will take a number of days, and therefore the trial will stop on 24th January.

Customers who changed their username can change it back on or before 24th January. Thank you for everyone's feedback and cooperation.

Started 8 Jan 13:00:00

26 Jan 09:00:00
Posted: 11 Jan 13:33:26

We carry out rolling upgrades of our L2TP Network Servers (LNSs) from time to time, overnight and one at a time. Starting from 1AM on Monday we will begin this process. It will be ongoing over the next two weeks, impacting a small proportion of customers each night with a PPP drop and reconnect at the preferred time as set on the control pages. Depending on your equipment, this could be anything from a fraction of a second of outage to a few minutes.

In tandem with this, circuits on BT back-haul will reconnect using our new fibres into BT.

12 Jan 11:51:52
A few BT back-haul connections on the "B" LNS moved to the "A" LNS just now. This was not quite as planned, but it has proved to be a good test of the new BT links. Sorry for any inconvenience. We expect the remaining rolling updates and switches to the new links to be more seamless next week.
19 Jan 08:28:29
We have now upgraded half of our LNSs and half of our BT circuits are on our new BT interconnects.
Resolution All circuits are now on the new Interconnects.
Started 15 Jan 01:00:00
Closed 26 Jan 09:00:00
Previously expected 24 Jan 10:00:00

3 Jan 17:27:06
Posted: 3 Jan 17:26:31
We have upgraded our order pages to allow ordering and regrading to the new terabyte tariffs on BT lines (previously these were only available on TT lines). They are not available on 20CN, but otherwise you can now select 1TB (on Home::1) or 2TB (on SoHo::1) as just another tariff choice. As usual, regrades take effect from next month, and the quota bonus system applies to these tariffs as well. Importantly, they can be "balanced" with lines on the same site that are not on terabyte levels, allowing multi-line sites to have different tariffs on each line, simply shared in total by both lines.
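As an illustrative sketch of the "balancing" described above (a hypothetical helper name, not our billing code), the quotas of balanced lines on one site simply form a shared pool:

```python
def shared_site_quota(line_quotas_gb):
    """Balanced lines on the same site share one pool equal to the sum of
    their individual tariff quotas, eg a 2TB (2000GB) SoHo::1 line plus a
    200GB line gives 2200GB shared in total between the two lines."""
    return sum(line_quotas_gb)
```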
Started 3 Jan 17:26:49

13 Dec 2017 16:11:18
Posted: 27 Nov 2017 20:17:21

As you know, we are one of the few ISPs that monitor every line every second for loss, latency and throughput, and provide that data to customers to see how well your line is performing, and how we are performing as your ISP.

Well, the latest FireBrick code now provides these graphs in SVG format, which means awesome scalable graphs with even more detail that are much clearer and easier to understand.

We are rolling these out as part of LNS upgrades over the coming weeks, and will have all systems using them soon. In the meantime you may see a mix of old (PNG) and new (SVG) graphs.

I hope you appreciate them as much as I do, along with the hard work we have put in to making them. We take the quality of our service very seriously, which is why we make this data available to customers, and the nicer and clearer we can make it, the better.

Started 27 Nov 2017

13 Dec 2017 16:00:00
Posted: 21 Oct 2017 17:30:55
We have identified a small number of TalkTalk connected circuits that have 1-4% packetloss in the evenings between 7pm and 9pm. We have worked with TalkTalk NOC and have helped them to identify a link running at full capacity within their network. Work is still ongoing to fix this.
21 Oct 2017 17:42:22
To aid with TalkTalk's investigations we have shut down our interconnect to TalkTalk in Harbour Exchange. We still have plenty of capacity in Telehouse, so this work is not service affecting.
21 Oct 2017 17:50:43
TalkTalk say: "NOC troubleshooting has identified that one of three 10 gig circuit between the LTS and the LDN at hex is reaching capacity between 19:30 and 21:30hrs and is the root cause of the packet loss."
10 Nov 2017 11:23:51
See: https://aastatus.net/2454 for details regarding testing the new TalkTalk network which will resolve this.
22 Nov 2017 21:33:35
Packet loss on TalkTalk connected circuits has increased over the past day or so. We're happy to move customers to trial TalkTalk's new network; do get in touch with trial@aa.net.uk
24 Nov 2017 15:02:12
TalkTalk have done some work on their side which should help reduce the low levels of packetloss that some lines are seeing.
12 Dec 2017 09:11:02
Work that TalkTalk have done recently has helped with congestion and we are now seeing far fewer lines with congestion.
We don't yet have a date when we'll move all customers across to the new network. We do hope this will be before Christmas, but due to the holidays it may be early January.
Resolution The congestion problem has been resolved. The longer term option of moving our customer base to TalkTalk's new network will happen in January 2018. More details of this will be posted nearer the time.
Broadband Users Affected 1.50%
Started 18 Oct 2017 19:00:00
Closed 13 Dec 2017 16:00:00

29 Nov 2017 17:17:11
Posted: 29 Nov 2017 17:17:01
One of our LNSs (F) restarted. It's not entirely clear why, but it does not look like any sort of attack. It came back in seconds and lines are reconnecting as they should.
Broadband Users Affected 9%
Started 29 Nov 2017 17:17:11
Closed 29 Nov 2017 17:17:11

04 Nov 2017 11:54:31
Posted: 25 Nov 2017 17:32:40

We don't usually bother with a PEW for this as it is routine, but we started some updates last night. As we have so many LNSs now it takes a couple of weeks to complete an update.

Various changes, but one of the awesome new features is SVG based CQM graphs. Customers will start to see these on clueless over the coming month.

29 Nov 2017 17:21:08
This is taking a few more days than normal, but is progressing. We have had a lot of feedback on the new graphs from my blog, and so we will be offering a range of options.
Started 25 Nov 2017
Closed 04 Nov 2017 11:54:31

23 Nov 2017 21:18:37
Posted: 16 Oct 2017 14:03:30

There are lots of news articles and discussions about the 'KRACK' attack vulnerability affecting WiFi client devices.

In summary, this affects WPA2 (and WPA1), as well as being an additional insecurity with TKIP, and is actually a bug in the WiFi spec itself - ie the design wasn't thought out properly. So any implementation which follows the spec is vulnerable. There is more technical information about this in the links below.

From the point of view of an AAISP customer, we do sell DSL routers with WiFi as well as WiFi access points, but this is a vulnerability in WiFi clients rather than in the routers themselves.

Generally, the fix for this vulnerability is on the client side - ie the computer or mobile device connecting to a WiFi network, and so customers should look for software updates for their devices and operating systems. There are links below regarding devices we are involved with which contain further information.

We'll add further information to this post as we receive it.

16 Oct 2017 14:04:01

For more information see: https://www.krackattacks.com/ and all the gory details are in the paper: https://papers.mathyvanhoef.com/ccs2017.pdf

WiFi device specific information:

Remember, the main devices to patch are your own devices such as computers, phones and tablets, once updates have been released; many devices will already have been patched if they are up to date. Updates to routers and access points will address the problem when they are acting as WiFi clients themselves, and so won't really help if your devices are still unfixed.

Started 16 Oct 2017 10:00:00

21 Nov 2017 18:00:00
Posted: 21 Nov 2017 13:32:59
We are upgrading some routers today - this generally has no impact, but there is a risk of some small issues. Sorry for very short notice.
Started 21 Nov 2017 13:00:00
Closed 21 Nov 2017 18:00:00
Previously expected 21 Nov 2017 19:00:00

17 Nov 2017 18:34:27
Posted: 17 Nov 2017 11:36:33

We are seeing a denial of service attack, which is causing more problems than usual, and this is disrupting traffic to some customers - but it is moving, and so affects different customers at different times.

Obviously we are working on this, and unfortunately cannot say a lot more of the details.

17 Nov 2017 12:09:40
We are still seeing problems; customers on this LNS would have seen drops and routing problems. We are working on this.
17 Nov 2017 12:10:32
This problem has also been affecting some of our transit routes, and so routing to parts of the internet will have had problems too.
17 Nov 2017 12:42:15
We are still working through this; we have moved lines off A.Gormless, and have had further problems with those lines. Please do bear with us; we hope things will calm down shortly.
17 Nov 2017 13:39:21
We cannot go in to a lot of detail, but I will say this is a completely new sort of attack. We have made some changes and will be reviewing ways we can mitigate attacks like this in the future. I'll re-open this issue if problems continue. Thank you all for your patience.
17 Nov 2017 13:54:44
Hmm, that attacker is clearly back from lunch.
17 Nov 2017 15:15:18
Not gone away - we are working on more...
17 Nov 2017 16:21:02
This now appears to be affecting VoIP too.
17 Nov 2017 17:19:54
We're rebalancing some lines due to the issues early morning today as per https://aastatus.net/2457 and additionally due to today's issues.
Some PPP sessions will disconnect and shortly reconnect. This is to fix an imbalance in the number of sessions we have per LNS.
17 Nov 2017 17:34:50
The line rebalancing is fine, and mostly broadband is fine, but we are tackling a moving target aiming at various services on an ongoing basis.
Resolution Quiet for now, we are monitoring still.
Started 17 Nov 2017 11:25:00
Closed 17 Nov 2017 18:34:27

19 Nov 2017 18:11:03
Posted: 19 Nov 2017 18:11:03
At the end of last week we installed three additional LNSs, all FireBrick FB6202 routers - these are the devices that sit between our carriers (such as BT and TalkTalk) and our network, and on which our broadband connections are terminated. The addition of these three increases our capacity and helps spread broadband connections over more routers. We now have eleven live and one spare. These are the 'A-L.Gormless' named hosts that you'd see on the control pages etc.
Started 19 Nov 2017 18:00:00

17 Nov 2017 13:40:06
Posted: 17 Nov 2017 01:26:59
At 00:42, a large number of lines across all carriers disconnected, and then reconnected. Some lines may have taken longer than others to come back, however session numbers now are noted to have slowly recovered to their usual levels. During this outage, a significant number of BT lines have ended up biased to one LNS in particular which will need dealing with.
As session numbers have stabilised and traffic levels look normal, a further investigation into this event will follow in the morning, along with plans to move customers off of I.gormless where the large number of BT sessions have accumulated.
17 Nov 2017 10:50:48
We expect to do some PPP restarts in the early evening, before the evening peak traffic, to put lines back on the right LNS. This will not affect all customers. We are then looking to do some PPP restarts overnight during the weekend to distribute lines over the newly installed LNSs.
Resolution The original issue has gone away, because of other problems today (DoS attack) we expect to be rebalancing lines later in the day and over night anyway. Thank you for your understanding.
Broadband Users Affected 90%
Started 17 Nov 2017 00:41:25 by AA Staff
Closed 17 Nov 2017 13:40:06

10 Nov 2017 12:48:12
Posted: 10 Nov 2017 12:17:54

OFCOM have today published their decision on automatic compensation for broadband and phone line faults. The result is a scheme adopted only by the major ISPs to pay compensation for delayed installs, delayed fault repair and missed appointments.

As a small ISP, we are relieved at this approach, as the original proposals contained a number of areas of serious concern. This is good news for consumers generally, and should hopefully mean that Openreach and the back-haul carriers that we use put in place systems to pay automatic compensation (though OFCOM have not insisted on this). If this happens, we will be able to pass on any compensation we receive automatically to AAISP customers as well.

So, whilst not part of the scheme, AAISP customers should benefit from automatic compensation in most cases.

We also hope, as do OFCOM, that in the 15 months lead up to the scheme starting, the likes of Openreach will actually improve services so as to avoid having to pay compensation. Obviously this will benefit all ISPs and their customers, not just those in the scheme.

Started 10 Nov 2017 12:18:04

07 Nov 2017 15:30:00
Posted: 01 Nov 2017 17:22:57
We are upgrading one of our LINX peering ports to 10G in the afternoon of 2nd November. We do not expect this to affect customers in any way.
02 Nov 2017 14:14:50
This work is being carried out now.
06 Nov 2017 12:10:27
We had to postpone this upgrade. It will now happen on Tuesday 7th November.
07 Nov 2017 13:47:00
This work is being carried out now.
Resolution This has been completed successfully.
Started 07 Nov 2017 13:30:00
Closed 07 Nov 2017 15:30:00

03 Nov 2017 20:55:02
Posted: 03 Nov 2017 19:56:18
We had an unexpected reset of the 'B.Gormless' LNS at 19:45. This caused customers on this LNS to be disconnected. Most have reconnected now though.
Resolution This LNS has reset a few times in the past few months. We'll look at replacing the hardware.
Started 03 Nov 2017 19:45:00
Closed 03 Nov 2017 20:55:02

03 Nov 2017 12:00:00
Posted: 02 Nov 2017 14:14:19

So as to facilitate some testing we're doing with TalkTalk, we are going to upgrade the software on the H.Gormless LNS. This will involve moving customers off this LNS first.

Therefore, customers on the H.Gormless LNS will have their PPP reset during the early hours of Friday 3rd November from 1AM. This will force your router to be logged off. It will then log back in to a new LNS. This usually happens very quickly.

The exact time will depend on your Line's 'LNS reset' setting, the default is 1AM, but you are able to pick the time. For more information about this process see: https://support.aa.net.uk/LNS_Switches

03 Nov 2017 15:15:47
There is a further switch back from I.Gormless to H.Gormless tonight (Saturday morning). This means we can then do the testing we need with TalkTalk.
03 Nov 2017 15:16:07
A further rolling update of all LNSs is expected later in the month.
Started 03 Nov 2017 01:00:00
Closed 03 Nov 2017 12:00:00

02 Nov 2017 22:05:00
Posted: 02 Nov 2017 21:59:48
We're experiencing high traffic levels on some of our core routers. We're investigating, but this may be causing some disruption for customers.
02 Nov 2017 22:06:32
Things are looking back to normal now...
Started 02 Nov 2017 21:45:00
Closed 02 Nov 2017 22:05:00

20 Oct 2017 15:01:04
Posted: 20 Oct 2017 15:01:04

Multiple vulnerabilities have been reported in dnsmasq, the service on the ZyXEL routers which provides a DNS resolver, DHCP functions and router advertisements for IPv6.
A list of these can be found here.

The scope of these is relatively broad, and most attack vectors are local, causing the router to fail DNS/DHCP/RA if exploited (a DoS condition); one additional vector allows execution of arbitrary code.
As dnsmasq is tightly integrated into the router, it can't simply be turned off. However, as a workaround you can change the DNS servers the router serves in its DHCP server, to minimise the potential impact should a local attacker perform a DNS request in a way which exploits the DoS conditions. This can be done by going to the Router Settings page on your control pages, entering our nameservers into the IPv4 DNS fields, clicking Save, then clicking the DHCP button at the bottom (for B10Ds only) or clicking Send ZyXEL configuration (B10As or B10Ds; this will wipe existing settings on the router).

An official patch from ZyXEL is estimated to be provided during December 2017 for the VMG1312-B10D routers (firmware V5.13(AAXA.7)), and January 2018 for the VMG1312-B10A routers.
An update will be made to this post once firmware updates have been released.

30 Jan 13:49:52
Updated software available, version 1.00(AAJZ.14)C0: https://www.zyxel.com/euosearch/dl-search.aspx?mci_country=uk&mci_lang=en&keyword=VMG1312-B10A&submit=Search
Previously expected 1 Feb

16 Oct 2017 14:00:00
Posted: 16 Oct 2017 13:33:49
Texts and emails reporting lines up/down, engineer visits and some other information have been broken over the weekend. This means some delayed messages are being sent out. Sorry for any confusion, it should all be sorted shortly.
Started 16 Oct 2017 13:00:00
Closed 16 Oct 2017 14:00:00
Previously expected 16 Oct 2017 14:00:00

10 Oct 2017 20:03:21
Posted: 26 Apr 2017 10:16:02
We have identified packet loss across our lines at MOSS SIDE that occurs between 8pm and 10pm. We have raised this with BT, who suggest that they hope to have this resolved by May 30th. We will update you on completion of this work.
Broadband Users Affected 0.15%
Started 26 Apr 2017 10:13:41 by BT
Closed 10 Oct 2017 20:03:21
Previously expected 30 May 2017 (Last Estimated Resolution Time from BT)

10 Oct 2017 19:58:12
Posted: 10 Oct 2017 20:00:14
The "B" LNS restarted for some reason; we are looking into why, but this means some customers lost PPP connectivity. Customers connected back again as quickly as their routers would retry - either to a backup LNS or to the "B" LNS, which rebooted within a few seconds. Sorry for any inconvenience.
Started 10 Oct 2017 19:52:00
Closed 10 Oct 2017 19:58:12

27 Sep 2017 13:15:00
Posted: 27 Sep 2017 13:06:43
At 13:01 we saw a number of broadband lines drop and reconnect. We are investigating the cause.
27 Sep 2017 13:18:20
This affected circuits on our 'H' and 'I' LNSs, customers on LNSs 'A' through to 'G' were unaffected.
Started 27 Sep 2017 13:01:00
Closed 27 Sep 2017 13:15:00

17 Sep 2017 10:10:00
Posted: 17 Sep 2017 09:42:04
Latest from TalkTalk: BT advise their engineer is due on site at 08:40 to investigate, and they are still attempting to source a Fibre Precision Test Officer. Our field engineer has been called out and is en route to site (ETA 08:30).
17 Sep 2017 09:43:18
TalkTalk say affected area codes are: 01481, 01223, 01553, 01480, 01787, 01353 and maybe others. ( Impacted exchanges are Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Derdingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton,Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourne, Mattishall, Norwich North, Rorston, Science Park, Swaffham, Steeple Mordon, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton. )
17 Sep 2017 09:43:37
TalkTalk say: Our field engineer and BT field engineer have arrived at site with investigations to the root cause now underway. At this stage Incident Management is unable to issue an ERT until the engineers have completed their diagnostics.
17 Sep 2017 09:55:09
Some lines logged back in at around 09:48
17 Sep 2017 10:10:17
Most are back online now.
Resolution From TalkTalk: Our NOC advised that alarms cleared at 09:45 and service has been restored. Our Network Support has raised a case with Axians (vendor) as there appeared to be an issue between the interface cards in the NGE router and the backplane (which facilitates data flow from the interface cards through the NGE). This incident is resolved and will now be closed with any further root cause with the Problem Management process.
Started 17 Sep 2017 06:20:00
Closed 17 Sep 2017 10:10:00

13 Sep 2017 16:37:14
Posted: 13 Sep 2017 16:17:34

I am pleased to confirm we have now launched "Quota Bonus"

The concept is simple, and applies to Home::1 and SoHo::1 on all levels including terabyte.

You start your billing month with your quota as normal, but get an extra bonus that is half of the unused quota, if any, from the previous month.

This allows people to build up a reserve and allow for occasional higher months without needing top-up.

Thanks to all of the customers for the feedback on my blog posts on this. --Adrian.

P.S. yes, it is sort of cumulative, see examples on http://aa.net.uk/broadband-quota.html
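The bonus arithmetic can be sketched as follows (a hypothetical helper, not the actual billing code):

```python
def start_of_month_allowance(quota_gb, unused_previous_gb):
    """Quota Bonus: you start the month with your normal tariff quota
    plus half of any allowance left unused the previous month."""
    return quota_gb + unused_previous_gb / 2

# eg a 200GB tariff with 100GB unused last month starts this
# month with 200 + 50 = 250GB.
```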

Started 13 Sep 2017 16:15:00

08 Sep 2017 01:00:00
Posted: 07 Sep 2017 23:04:00
Packet loss has been noted to some destinations, routed via LONAP. Our engineers are currently investigating and attempting to work around the loss being observed.
07 Sep 2017 23:23:52
We have disabled all of our LONAP ports for the moment - this reduces our capacity somewhat, but at this time of day the impact to customers is low. We've seen unconfirmed reports that there is some sort of problem with the LONAP peering network, we are still investigating ourselves. (LONAP is a peering exchange in London which connects up lots of ISPs and large internet companies, it's one of the main ways we connect to the rest of the Internet).
07 Sep 2017 23:24:21
LONAP engineers are looking in to this.
07 Sep 2017 23:31:09
We are now not seeing packet loss on the LONAP network - we'll enable our sessions after getting an 'all-clear' update from LONAP staff.
07 Sep 2017 23:39:44
Packet loss on the LONAP network has returned, we still have our sessions down, we're still waiting for the all-clear from LONAP before we enable our sessions again. Customers are on the whole, unaffected by this. There are reports of high latency spikes to certain places, which may or may not be related to what is happening with LONAP at the moment.
08 Sep 2017 06:57:44
We have re-enabled our LONAP sessions.
Resolution The LONAP peering exchange confirm that they had some sort of network problem which was resolved at around 1AM. It's unconfirmed, but the problem looks to be related to some sort of network loop.
Broadband Users Affected 100%
Started 07 Sep 2017 22:42:00 by AA Staff
Closed 08 Sep 2017 01:00:00
Previously expected 08 Sep 2017 03:01:52 (Last Estimated Resolution Time from AAISP)

05 Sep 2017 14:30:00
Posted: 05 Sep 2017 12:01:04
We are seeing very high latency - over 1,000ms on many lines in the East of England. Typically around the Cambridgeshire/Suffolk area. This is affecting BT circuits, TalkTalk circuits are OK. We are investigating further and contacting BT. We suspect this is a failed link within the BT network in the Cambridge area. More details to follow shortly.
05 Sep 2017 12:28:00

Example line graph.
05 Sep 2017 12:35:38
We're currently awaiting a response from BT regarding this.
05 Sep 2017 12:37:16
BT are now actively investigating the fault.
05 Sep 2017 14:04:16
As expected, this is affecting other ISPs who use BT backhaul.
05 Sep 2017 14:23:02
Latest update from BT:- "The Transmission group are investigating further they are carrying out tests on network nodes, As soon as they have identified an issue we will advise you further. We apologies for any inconvenience caused while testing is carried out."
05 Sep 2017 14:34:35
Latency is now back to normal. We will post again when we hear back from BT.
Resolution BT have confirmed that a card in one of their routers was replaced yesterday to resolve this.
Started 05 Sep 2017 11:00:00
Closed 05 Sep 2017 14:30:00

04 Sep 2017 17:23:17
Posted: 04 Sep 2017 17:22:46

We have a number of tariff changes planned, after a lot of interesting comments from my blog post - thank you all.

Some things are simple, and we are able to do sooner rather than later, like the extra 50GB already announced. Some will not be until mid to late October as they depend on other factors. Some may take longer still.

To try and ensure we get improvements as quickly as possible for customers I am updating a news item on our web site with details as we go.


As you will see, we are testing a change to make top-up on Home::1/SoHo::1 not expire. We have the end of a period (full moon) in two days where we can see if code changes work as expected on a live customer line. If all goes well then later this week we can change the description on the web site and officially launch this change.

Do check that page for updates and new features we are adding as we go.

06 Sep 2017 13:57:18
We have made top-up on Home::1 and SoHo::1 not expire, continuing until you have used it all. This applies to any top-up purchased from now on.
Started 04 Sep 2017 17:19:37

03 Sep 2017 08:14:53
Posted: 03 Sep 2017 08:12:02

We have changed the monthly quota allowances on Home::1 and SoHo::1 today, increasing all of the sub terabyte rates by 50GB per month, without changing prices.

I.e. you now get 200GB for the previous price of 150GB, and 300GB for the previous price of 250GB.

Existing customers have had this additional amount added to their September quota.

Started 03 Sep 2017 08:10:00

13 Dec 2017 16:00:00
Posted: 31 Aug 2017 08:36:41
TalkTalk have lots of small planned work projects happening at the moment. These generally happen from midnight and affect a small number of exchanges at a time. The work does cause service to stop for 30 minutes or longer. TalkTalk publish this information on their status page: https://managed.mytalktalkbusiness.co.uk/network-status/
We are looking at ways of adding these planned works to the Control Pages so as to make it clearer for customers if they are going to be affected.
Resolution Whilst there still may be works happening as usual, the bulk of the upgrades that TalkTalk have been doing are over.
Started 31 Aug 2017 01:00:00
Closed 13 Dec 2017 16:00:00
Previously expected 31 Oct 2017 07:00:00

29 Aug 2017 13:59:08
Posted: 07 Jul 2017 10:39:42

For the past few years we've been supplying the ZyXEL VMG1312-B10A router. This is being discontinued and we will start supplying its replacement, the ZyXEL VMG1312-B10D (note the subtle difference!).

The new router is smaller than the previous one and has a very similar feature set and web interface to the old one.

We are still working through our configuration process and are updating the Support site with documentation. We are hoping this model will resolve many of the niggles we have with the old one too.

Started 07 Jul 2017 13:12:00

29 Aug 2017 13:00:00
Posted: 17 Jun 2017 15:24:16
We've seen very slight packet loss on a number of TalkTalk connected lines this week in the evenings. This looks to be congestion; it may show up on our CQM graphs as a few pixels of red at the top of the graph between 7pm and midnight. We have an incident open with TalkTalk. We moved traffic to our Telehouse interconnect on Friday afternoon and Friday evening looked to be better. This may mean that the congestion is related to TalkTalk in Harbour Exchange, but it's a little too early to tell at the moment. We are monitoring this and will update again after the weekend.
19 Jun 2017 16:49:34

TalkTalk did some work on the Telehouse side of our interconnect on Friday as follows:

"The device AA connect into is a chassis with multiple cards and interfaces creating a virtual switch. The physical interface AA plugged into was changed to another physical interface. We suspect this interface to be faulty as when swapped to another it looks to have resolved the packet loss."

We will be testing both of our interconnects individually over the next couple of days.

20 Jun 2017 10:29:05
TalkTalk are doing some work on our Harbour Exchange side today. Much like the work they did on the Telehouse side, they are moving our port. This will not affect customers though.
28 Jun 2017 20:46:34

Sadly, we are still seeing very low levels of packetloss on some TalkTalk connected circuits in the evenings. We have raised this with TalkTalk today, they have investigated this afternoon and say: "Our Network team have been running packet captures at Telehouse North and replicated the packet loss. We have raised this into our vendor as a priority and are due an update tomorrow."

We'll keep this post updated.

29 Jun 2017 22:12:17

Update from TalkTalk regarding their investigations today:- Our engineering team have been working through this all day with the Vendor. I have nothing substantial for you just yet, I have been told I will receive a summary of today's events this evening but I expect the update to be largely "still under investigation". Either way I will review and fire an update over as soon as I receive it. Our Vendor are committing to a more meaningful update by midday tomorrow as they continue to work this overnight.

01 Jul 2017 09:39:48
Update from TT: Continued investigation with Juniper, additional PFE checks performed. Currently seeing the drops on both VC stacks at THN and Hex. JTAC have requested additional time to investigate the issue. They suspect they have an idea what the problem is, however they need to go through the data captures from today to confirm that it is a complete match. Actions Juniper - Review logs captured today, check with engineering. Some research time required, Juniper hope to have an update by CoB Monday. Discussions with engineering will be taking place during this time.
02 Jul 2017 21:19:57

Here is an example - the loss is quite small on individual lines, but as we are seeing this sort of loss on many circuits at the same time (evenings) it makes this more severe. It's only due to our constant monitoring that this gets picked up.

03 Jul 2017 21:47:31
Today's update from TalkTalk: "JTAC [TT's vendor's support] have isolated the issue to one FPC [Flexible PIC Concentrator] and now need Juniper Engineering to investigate further... unfortunately Engineering are US-based and have a public holiday which will potentially delay progress... Actions: Juniper - Review information by [TalkTalk] engineering – Review PRs - if this is a match to a known issue or it's new. Some research time required, Juniper hope to have an update by Thursday"
07 Jul 2017 08:41:26
Update from TalkTalk yesterday evening: "Investigations have identified a limitation when running a mix mode VC (EX4200’s and EX4550's), the VC cable runs at 16gbps rather than 32gbps (16gbps each way). This is why we are seeing slower than expected speeds between VC’s. Our engineering team are working with the vendor exploring a number of solutions."
17 Jul 2017 14:29:29

Saturday 15th and Sunday 16th evenings were a fair bit worse than previous evenings. On both evenings we saw higher levels of packet loss (between 1% and 3% on many lines) and we also saw slow single TCP thread speeds, much like we saw in April. We did contact TalkTalk over the weekend; this has been blamed on a faulty card in the TalkTalk network that was replaced on Thursday but has since caused a traffic imbalance on this part of the network.

We expect things to improve but we will be closely monitoring this on Monday evening (17th) and will report back on Tuesday.

22 Jul 2017 20:23:24
TalkTalk are planning network hardware changes relating to this in the early hours of 1st August. Details here: https://aastatus.net/2414
01 Aug 2017 10:42:58
TalkTalk called us shortly after 9am to confirm that they had completed the work in Telehouse successfully. We will move traffic over to Telehouse later today and will be reporting back the outcome on this status post over the following days.
03 Aug 2017 11:23:55
TalkTalk confirmed that they have completed the work in Harbour Exchange successfully. Time will tell if these sets of major work have helped with the problems we've been seeing on the TalkTalk network; we will be reporting back the outcome on this status post early next week.
10 Aug 2017 16:39:30
The packetloss issue has been looking better since TalkTalk completed their work. We are still wanting to monitor this for another week or so before closing this incident.
29 Aug 2017 13:56:53
The service has been working well over the past few weeks. We'll close this incident now.
Started 14 Jun 2017 15:00:00
Closed 29 Aug 2017 13:00:00

14 Aug 2017 09:14:59
Posted: 11 Aug 2017 18:44:38
We're needing to restart the 'e.gormless' LNS - this will cause PPP to drop for customers. Update to follow.
11 Aug 2017 18:46:19
Customers on this LNS should be logging back in (if not already).
11 Aug 2017 19:00:27
There are still some lines left to log back in, but most are back now
11 Aug 2017 19:10:47
Most customers are back now.
13 Aug 2017 12:12:47
This happened again on Sunday morning, and again a restart was needed. The underlying problem is being investigated.
Resolution We have now identified the cause of the issue that impacted both "careless" and "e.gormless". There is a temporary fix in place now, which we expect to hold, and the permanent fix will be deployed on the next rolling update of LNSs.
Started 11 Aug 2017 18:30:00
Closed 14 Aug 2017 09:14:59

13 Jul 2017 18:00:00
[Broadband] TT blip - Closed
Posted: 13 Jul 2017 11:21:37
We are investigating an issue with some TalkTalk lines that disconnected at 10:51 this morning. Most have come back, but there are about 20 that are still offline. We are chasing TalkTalk Business.
13 Jul 2017 11:23:50
Latest update from TT..... We have just had further reports that other resellers are also experiencing mass circuit drops at a similar time. This is currently being investigated by our NOC team and updates will follow after investigation.
Started 13 Jul 2017 10:51:49 by AAISP Pro Active Monitoring Systems
Closed 13 Jul 2017 18:00:00
Previously expected 13 Jul 2017 15:19:49

19 Jul 2017
Posted: 07 Feb 2017 14:32:32

We are seeing issues with IPv6 on a few VDSL cabinets serving our customers. There is no apparent geographical commonality amongst these, as far as we can tell.

Lines pass IPv4 fine, but only intermittently passing IPv6 TCP/UDP for brief amounts of time, usually 4 or so packets, before breaking. Customers have tried BT modem, Asus modem, and our supplied ZyXEL as a modem and router, no difference on any. We also lent them a FireBrick to do some traffic dumps.

Traffic captures at our end and the customer end show that the IPv6 TCP and UDP packets are leaving us but not reaching the customer. ICMP (eg pings) do work.
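The comparison described above can be sketched as a toy example (illustrative only, not our actual capture tooling): given packet summaries from both capture points, list the protocols that leave our end but never arrive at the customer end.

```python
# Toy sketch: compare packet summaries from two capture points and report
# which protocols are lost in transit. All data here is made up for
# illustration - the real diagnosis used traffic dumps from a FireBrick.

def missing_protocols(sent, received):
    """sent/received: iterables of (protocol, packet_id) tuples."""
    lost = set(sent) - set(received)
    return sorted({proto for proto, _ in lost})

# Matches the symptom: TCP and UDP leave us but only ICMP arrives.
sent = [("tcp", 1), ("tcp", 2), ("udp", 3), ("icmpv6", 4)]
received = [("icmpv6", 4)]

print(missing_protocols(sent, received))  # ['tcp', 'udp']
```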

The first case was reported to us in August 2016, and it has taken a while to get to this point. Until very recently there was only a single reported case. Now that we have four cases we have a bit more information and are able to look at commonalities between them.

Of these circuits, two are serving customers via TalkTalk and two are serving customers via BT backhaul. So this isn't a "carrier network issue", as far as we can make out. The only thing that we can find that is common is that the cabinets are all ECI. (Actually - one of the BT connected customers has migrated to TalkTalk backhaul (still with us, using the same cabinet and phone line etc) and the IPv6 bug has also moved to the new circuit via TalkTalk as the backhaul provider)

We are working with senior TalkTalk engineers to try to perform a traffic capture at the exchange - at the point the traffic leaves TalkTalk equipment and is passed on to Openreach - this will show if the packets are making it that far and will help in pinning down the point at which packets are being lost. Understandably this requires TalkTalk engineers working out of hours to perform this traffic capture and we're currently waiting for when this will happen.

02 Mar 2017 11:14:48
Packet captures on an affected circuit carried out by TalkTalk have confirmed that this issue most likely lies in the Openreach network. Circuits that we have been made aware of are being pursued with both BT and TalkTalk for Openreach to make further investigations into the issue.
If you believe you may be affected please do contact support.
17 Mar 2017 09:44:00
Having had TalkTalk capture the traffic in the exchange, the next step is to capture traffic at the road-side cabinet. This is being progressed with Openreach and we hope it will happen 'soon'.
29 Mar 2017 09:52:52
We've received an update from BT advising that they have been able to replicate the missing IPv6 packets, this is believed to be a bug which they are pursuing with the vendor.

In the mean time they have also identified a fix which they are working to deploy. We're currently awaiting further details regarding this, and will update this post once further details become known.
18 May 2017 16:30:59
We've been informed that the fix for this issue is currently being tested with Openreach's supplier, but should be released to them on the 25th May. Once released to Openreach, they will then perform internal testing of this before deploying it to their network. We haven't been provided with any estimation of dates for the final deployment of this fix yet.
In the interim, all known affected circuits on TalkTalk backhaul have had a linecard swap performed at the cabinet as a workaround, which has restored IPv6 on all TT circuits known to be affected by this issue.
BT have come back to us suggesting that they too have a workaround, so we have requested that it is implemented on all known affected BT circuits to restore IPv6 to the customers known to have this issue on BT backhaul.
Resolution A fix was rolled out in the last week of June; re-testing with impacted customers has shown that IPv6 is functioning correctly on their lines again after Openreach applied this fix.
Broadband Users Affected 0.05%
Started 07 Feb 2017 09:00:00 by AA Staff
Closed 19 Jul 2017

14 Jul 2017 16:48:39
Posted: 14 Jul 2017 10:33:01
We've just seen BT and TT lines drop. We are investigating.
14 Jul 2017 10:57:12
This is similar to yesterday's problem. We have lost BGP connectivity to both our carriers. Over the past few minutes the BGP sessions have been going up and down, meaning customers are logging in and then out again. Updates to follow shortly.
14 Jul 2017 11:16:46
Sessions are looking a bit more stable now... customer lines are still reconnecting
14 Jul 2017 11:56:08
We have about half of DSL lines logged in, but the remaining half are struggling due to what looks like a Layer 2 issue on our network.
14 Jul 2017 12:23:19
More DSL lines are now back up. If customers are still off, a reboot may help.
14 Jul 2017 12:35:51
A number of TT and BT lines just dropped. They are starting to reconnect now though.
14 Jul 2017 12:52:31
This is still ongoing - most DSL lines are back, but some have been dropping and the network is still unstable. We are continuing to investigate.
14 Jul 2017 12:55:50
We managed to get a lot of lines up after around an hour, during which there was a lot of flapping. The majority of the rest (some of the TalkTalk backhaul lines) came up, and flapped a bit, at 12:00 and came back properly around 12:40. However, we are still trying to understand the issue, still have some problems, and suspect there may be more outages. The problem appears to be something at layer 2, but impacting several sites at once.
14 Jul 2017 14:41:23
We believe these outages are due to a partial failure of one of our core switches. We've moved most services away from this switch and are running diagnostics on it at the moment. We are not expecting these diagnostics to affect other services.
14 Jul 2017 17:00:40
The network has been stable for a good few hours now. One of our 40G Telehouse-to-Harbour Exchange interconnects has been taken down and some devices have been moved off of the suspect switch. We have further work to do in investigating the root cause and in planning what we will do to stop this from happening again. We apologise to our customers for the disruption these two outages have caused, and we will work to prevent a recurrence.
Started 14 Jul 2017 10:30:00
Closed 14 Jul 2017 16:48:39

13 Jul 2017 21:38:01
Posted: 13 Jul 2017 18:56:13
Multiple circuits have disconnected and reconnected, staff are investigating
13 Jul 2017 19:00:34
Sessions seem to be repeatedly flapping rather than reconnecting - staff are investigating.
13 Jul 2017 20:05:47
We are still working on this, it's a rather nasty outage I'm afraid and is proving difficult to track down.
13 Jul 2017 20:17:42
Lines are re-connecting now...
13 Jul 2017 20:19:16
Apologies for the loss of graphs on a.gormless. We usually assume the worst and that our kit is the cause, so we tried a reboot of a FireBrick LNS. It did not help, but did clarify the real cause, which was the Cisco switches. Sorry for the time it took to track this one down.
13 Jul 2017 20:19:27
Some lines are being forced to reconnect so as to move them to the correct LNS; this will cause a logout/login for some customers...
13 Jul 2017 20:25:52
Lines are still connecting, not all are back, but the number of connected lines is increasing.
13 Jul 2017 20:29:55
We're doing some work which may cause some lines to go offline - we expect lines to start reconnecting in 10 minutes' time.
13 Jul 2017 20:34:35
We are rebooting stuff to try and find the issue. This is very unusual.
13 Jul 2017 20:42:56
Things are still not stable and lines are still dropping. We're needing to reboot some core network switches as part of our investigations and this is happening at the moment.
13 Jul 2017 20:52:29
Lines are reconnecting once more
13 Jul 2017 21:17:36
Looking stable.
13 Jul 2017 21:23:48
Most lines are back online now; if customers are still not online, a reboot of the router or modem may be required as the session may have got stuck inside the backhaul network.
13 Jul 2017 21:41:38

We'll close this incident as lines have been stable for an hour. We'll update the post with further information as to the cause and any action we will be taking to help stop this type of incident from happening again.

We would like to thank our customers for their patience and support this evening. We had many customers in our IRC channel who were in good spirits and supportive to our staff whilst they worked on this incident.

14 Jul 2017 14:42:20
A similar problem occurred on Friday morning, this is covered on the following post: https://aastatus.net/2411
Closed 13 Jul 2017 21:38:01

10 Jul 2017 02:21:59
[Broadband] BT blip - Closed
Posted: 10 Jul 2017 02:16:23
Looks like all lines on BT backhaul blipped at just before 2am. Lines reconnected right away though. Some lines are on the wrong LNS now, so we may move them back - which will show a longer gap in the graphs.
10 Jul 2017 02:22:25
Sessions are all back, and on the right LNS again.
Started 10 Jul 2017 01:59:03
Closed 10 Jul 2017 02:21:59
Previously expected 10 Jul 2017 02:30:00

03 Jun 2017 17:28:30
Posted: 03 Jun 2017 17:06:27
Something definitely not looking right, seems to be intermittent and impacting Internet access.
03 Jun 2017 17:10:34
Looks like a denial of service attack of some sort.
03 Jun 2017 17:17:45
Looks like it may be more widespread than just us.
03 Jun 2017 17:23:17
Definitely a denial of service attack, impacted some routers and one of the LNSs. Some graphs lost.
Resolution Target isolated for now.
Started 03 Jun 2017 16:59:05
Closed 03 Jun 2017 17:28:30

08 Jun 2017 10:49:27
Posted: 07 Jun 2017 10:33:00
We are seeing some customers who are still down following a blip within TalkTalk. We currently have no root cause but are investigating.
07 Jun 2017 11:13:21
A small number of lines are still down, however most have now resumed service. We are still communicating with TalkTalk so we can restore service for all affected lines.
07 Jun 2017 11:23:02
Looks like we're seeing another blip affecting many more customers this time. We are still speaking to TalkTalk to determine the cause of this.
07 Jun 2017 11:59:53

TalkTalk have raised an incident with the following information:

"We have received reports from a number of B2B customers (Wholesale ADSL) who are experiencing a loss of their Broadband services. The impact is believed to approximately 600 lines across 4 or 5 partners. All of the impacted customers would appear to route via Harbour Exchange. Our NOC have completed initial investigations and have passed this to our IP operations team to progress. "

As a result, we'll move TalkTalk traffic away from the Harbour Exchange datacentre to see if it helps. This move will be seamless and will not affect other customers.

07 Jun 2017 12:05:38
Our TalkTalk traffic has now been moved away from HEX89. If there are still a small number of customers offline, rebooting the router/modem may force a re-connection and a successful login.
07 Jun 2017 12:37:29
At 12:29 we saw around 80 lines drop, most of these are back online as of 12:37 though. The incident is still open with TalkTalk engineers.
07 Jun 2017 13:19:57
TalkTalk are really not having a good day. We're now seeing packetloss on lines as well as a few more drops. We're going to bring the HEX89 interconnect back up in case that is in any way related, we're also chasing TT on this.
07 Jun 2017 14:37:21
This is still an open incident with TalkTalk, it is affecting other ISPs using TalkTalk as their backhaul. We have chased TalkTalk for an update.
07 Jun 2017 15:37:21

Update from TalkTalk: "Network support have advised that service has been partially restored. Currently Network Support are continuing to re-balance traffic between both LTS’s (HEX & THN). This work is currently being completed manually by our Network support team who ideally need access to RadTools to enable them to balance traffic more efficiently. We are currently however experiencing an outage of RadTools which is being managed under incident 10007687. We will continue to provide updates on the progress as soon as available."

Probably as a result, we are still seeing low levels of packetloss on some TalkTalk lines.

07 Jun 2017 16:49:12
It's looking like the low levels of packetloss stopped at 16:10. Things are looking better.
08 Jun 2017 08:31:43
There are a handful of customers that are still offline, we have sent the list of these circuits to TalkTalk to investigate.
08 Jun 2017 10:26:02

Update from TalkTalk: "We have received reports from a number of B2B customers (Wholesale ADSL & FTTC) who are experiencing authentication issues with their Broadband services. The impact is believed to approximately 100 lines across 2 partners. All of the impacted customers would appear to route via Harbour Exchange. Our NOC have completed initial investigations and have passed this to our Network support team to progress."

We have actually already taken down our Harbour Exchange interconnect but this has not helped.

08 Jun 2017 10:49:27
Over half of these remaining affected lines logged back in at 2017-06-08 10:38
08 Jun 2017 11:22:39
The remaining customers offline should try rebooting their router/modem and if still not online then please contact Support.

From TalkTalk: The root cause of this issue is believed to have been caused by a service request which involved 3 network cards being installed in associated equipment at Harbour exchange. This caused BGP issues on card (10/1). To resolve this Network Support shut down card (10/1) but this did not resolve all issues. This was then raised this to Ericsson who recommended carrying out an XCRP switchover on the LTS. Once the switchover was carried out all subscribers connections dropped on the LTS and the majority switched over to the TeleHouse North LTS. Network support then attempted to rebalance the traffic across both LTS platform however were not able to due to an ongoing system incident impacting Radius Tools. Network support instead added 2 new 10G circuits to the LTS platform to relieve the congestion and resolve any impact. As no further issues have been identified this incident will now be closed and any further RCA investigation will be carried out by problem management.

Regarding the problem with a few circuits not being able to establish PPP, the report from TalkTalk is as follows: Network support have advised that they have removed HEX (harbour exchange) from the radius to restore service until a permanent fix can be identified. Network support are liaising with Ericsson in regards to this and investigations are ongoing.

Broadband Users Affected 0.20%
Started 07 Jun 2017 10:05:00
Closed 08 Jun 2017 10:49:27

27 Mar 2017 09:30:00
Posted: 19 Feb 2017 18:35:15
We have seen some cases with degraded performance on some TT lines, and we are investigating. Not a lot to go on yet, but be assured we are working on this and engaging the engineers within TT to address this.
21 Feb 2017 10:13:20

We have completed further tests and we are seeing congestion manifesting itself as slow throughput at peak times (evenings and weekends) on VDSL (FTTC) lines that connect to us through a certain Talk Talk LAC.

This has been reported to senior TalkTalk staff.

To explain further: VDSL circuits are routed from TalkTalk to us via two LACs. We are seeing slow throughput at peak times on one LAC and not the other.

27 Feb 2017 11:08:58
Very often with congestion it is easy to find the network port or system that is overloaded but so far, sadly, we've not found the cause. A&A staff and customers and TalkTalk network engineers have done a lot of checks and tests on various bits of the backhaul network but we are finding it difficult to locate the cause of the slow throughput. We are all still working on this and will update again tomorrow.
27 Feb 2017 13:31:39
We've been in discussions with other TalkTalk wholesalers who have also reported the same problem to TalkTalk. There does seem to be more of a general problem within the TalkTalk network.
27 Feb 2017 13:32:12
We have had an update from TalkTalk saying that, based on multiple reports from ISPs, they are investigating further.
27 Feb 2017 23:21:21
Further tests this evening by A&A staff shows that the throughput is not relating to a specific LAC, but that it looks like something in TalkTalk is limiting single TCP sessions to 7-9M max during peak times. Running single iperf tests results in 7-9M, but running ten at the same time can fill a 70M circuit. We've passed these findings on to TalkTalk.
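As a rough illustration of why these iperf results point at per-flow limiting, here is a toy model (an assumption for illustration, not a description of TalkTalk's actual equipment): if each TCP flow is capped but the circuit is not, the aggregate over n parallel flows is the smaller of the two limits.

```python
# Toy model of a hypothetical per-flow cap: a single capped flow achieves
# only the per-flow rate, while enough parallel flows fill the circuit.

def expected_aggregate(n_flows, per_flow_mbps, line_mbps):
    """Aggregate throughput under a per-flow cap and a line-rate ceiling."""
    return min(n_flows * per_flow_mbps, line_mbps)

print(expected_aggregate(1, 8, 70))   # 8  - matches the slow single iperf run
print(expected_aggregate(10, 8, 70))  # 70 - ten parallel runs fill a 70M circuit
```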
28 Feb 2017 09:29:56
As expected the same iperf throughput tests are working fine this morning. TT are shaping at peak times. We are pursuing this with senior TalkTalk staff.
28 Feb 2017 11:27:45
TalkTalk are investigating. They have stated that circuits should not be rate limited and that they are not intentionally rate limiting. They are still investigating the cause.
28 Feb 2017 13:14:52
Update from TalkTalk: Investigations are currently underway with our NOC team who are liaising with Juniper to determine the root cause of this incident.
01 Mar 2017 16:38:54
TalkTalk are able to reproduce the throughput problem and investigations are still ongoing.
02 Mar 2017 16:51:12
Some customers did see better throughput on Wednesday evening, but not everyone. We've done some further testing with TalkTalk today and they continue to work on this.
02 Mar 2017 22:42:27
We've been in touch with the TalkTalk Network team this evening and have been performing further tests (see https://aastatus.net/2363 ). Investigations are still ongoing, but the work this evening has given a slight clue.
03 Mar 2017 14:24:48
During tests yesterday evening we saw slow throughput when using the Telehouse interconnect and fast (normal) throughput over the Harbour Exchange interconnect. Therefore, this morning, we disabled our Telehouse North interconnect. We will carry on running tests over the weekend and we welcome customers to do the same. We are expecting throughput to be fast for everyone. We will then liaise with TalkTalk engineers regarding this on Monday.
06 Mar 2017 15:39:33

Tests over the weekend suggest that speeds are good when we only use our Harbour Exchange interconnect.

TalkTalk are moving the interconnect we have at Telehouse to a different port at their side so as to rule out a possible hardware fault.

06 Mar 2017 16:38:28
TalkTalk have moved our THN port and we will be re-testing this evening. This may cause some TalkTalk customers to experience slow (single thread) downloads this evening. See: https://aastatus.net/2364 for the planned work notice.
06 Mar 2017 21:39:55
The testing has been completed, and sadly we still see slow speeds when using the THN interconnect. We are now back to using the Harbour Exchange interconnect where we are seeing fast speeds as usual.
08 Mar 2017 12:30:25
Further testing is happening on Thursday evening: https://aastatus.net/2366 - this is to try and help narrow down where the problem is occurring.
09 Mar 2017 23:23:13
We've been testing this evening, this time with some more customers, so thank you to those who have been assisting. (We'd welcome more customers to be involved - you just need to run an iperf server on IPv4 or IPv6 and let one of our IPs through your firewall - contact Andrew if you're interested.) We'll be passing the results on to TalkTalk, and the investigation continues.
10 Mar 2017 15:13:43
Last night we saw some lines slow and some lines fast, so having extra lines to test against should help in figuring out why this is the case. Quite a few customers have set up iperf servers for us and we are now testing 20+ lines. (Still happy to add more.) Speed tests are being run three times an hour and we'll collate the results after the weekend and report the findings back to TalkTalk.
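Collation logic along these lines could flag the affected circuits (a hypothetical sketch; the line IDs, peak window, and threshold are made up, not our real reporting):

```python
# Hypothetical collation sketch: group iperf samples per line and flag lines
# whose evening-peak median throughput falls below a threshold.
from statistics import median

def slow_lines(samples, threshold_mbps):
    """samples: list of (line_id, hour_of_day, mbps); returns slow line IDs."""
    by_line = {}
    for line_id, hour, mbps in samples:
        if 19 <= hour <= 23:  # evening peak window only
            by_line.setdefault(line_id, []).append(mbps)
    return sorted(l for l, v in by_line.items() if median(v) < threshold_mbps)

samples = [("line-a", 20, 8.1), ("line-a", 21, 7.5),
           ("line-b", 20, 65.0), ("line-b", 22, 70.2)]
print(slow_lines(samples, 20))  # ['line-a']
```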
11 Mar 2017 20:10:21
13 Mar 2017 15:22:43

We now have samples of lines which are affected by the slow throughput and those that are not.

Since 9pm Sunday we are using the Harbour Exchange interconnect in to TalkTalk and so all customers should be seeing fast speeds.

This is still being investigated by us and TalkTalk staff. We may do some more testing in the evenings this week and we are continuing to run iperf tests against the customers who have contacted us.
14 Mar 2017 15:59:18

TalkTalk are doing some work this evening and will be reporting back to us tomorrow. We are also going to be carrying out some tests ourselves this evening too.

Our tests will require us to move traffic over to the Telehouse interconnect, which may mean some customers will see slow (single thread) download speeds at times. This will be between 9pm and 11pm

14 Mar 2017 16:45:49
This is from the weekend:

17 Mar 2017 10:42:28
We've stopped the iperf testing for the time being. We will start it back up again once we or TalkTalk have made changes that require testing to see if things are better or not, but at the moment there is no need for the testing as all customers should be seeing fast speeds due to the Telehouse interconnect not being in use. Customers who would like quota top-ups, please do email in.
17 Mar 2017 18:10:41
To help with the investigations, we're also asking for customers with BT connected FTTC/VDSL lines to run iperf so we can test against them too - details on https://support.aa.net.uk/TTiperf Thank you!
20 Mar 2017 12:54:02
Thanks to those who have set up iperf for us to test against. We ran some tests over the weekend whilst swapping back to the Telehouse interconnect, and tested BT and TT circuits for comparison. Results are that around half the TT lines slowed down but the BT circuits were unaffected.

TalkTalk are arranging some further tests to be done with us which will happen Monday or Tuesday evening this week.

22 Mar 2017 09:37:30
We have scheduled testing of our Telehouse interlink with TalkTalk staff for this Thursday evening. This will not affect customers in any way.
22 Mar 2017 09:44:09
In addition to the interconnect testing on Thursday mentioned above, TalkTalk have also asked us to retest DSL circuits to see if they are still slow. We will perform these tests tonight, Wednesday evening.

TT have confirmed that they have made a configuration change on the switch at their end in Telehouse - this is the reason for the speed testing this evening.

22 Mar 2017 12:06:50
We'll be running iperf3 tests against our TT and BT volunteers this evening, every 15 minutes from 4pm through to midnight.
22 Mar 2017 17:40:20
We'll be changing over to the Telehouse interconnect between 8pm and 9pm this evening for testing.
23 Mar 2017 10:36:06

Here are the results from last night:

And BT Circuits:

Some of the results are rather up and down, but these lines are in use by customers so we would expect some fluctuations, but it's clear that a number of lines are unaffected and a number are affected.

Here's the interesting part. Since this problem started we have rolled out some extra logging on to our LNSs; this has taken some time as we only update one a day. However, we are now logging the IP address used at our side of L2TP tunnels from TalkTalk. We have eight live LNSs and each one has 16 IP addresses that are used. With this logging we've identified that circuits connecting over tunnels on 'odd' IPs are fast, whilst those on tunnels on 'even' IPs are slow. This points to a LAG issue within TalkTalk, which is what we have suspected from the start, but this data should hopefully help TalkTalk with their investigations.
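An odd/even split like this is exactly what a two-member LAG with a simple IP-based hash would produce. The toy model below is our assumption only — TalkTalk's actual hash algorithm is not known to us, and the addresses are documentation-range placeholders — but it shows how a fault on one member link would degrade precisely half of the tunnels:

```python
import ipaddress

def lag_member(tunnel_ip: str, members: int = 2) -> int:
    """Pick a LAG member link by hashing the L2TP tunnel IP.
    A deliberately simple stand-in for a real switch hash."""
    return int(ipaddress.ip_address(tunnel_ip)) % members

# With a parity-style hash, 'even' tunnel IPs all land on one member
# link and 'odd' IPs on the other - so a fault on a single member
# degrades exactly half the tunnels, matching what we observed.
tunnels = [f"198.51.100.{octet}" for octet in range(1, 9)]
for ip in tunnels:
    link = lag_member(ip)
    print(f"{ip} -> member {link} ({'slow' if link == 0 else 'fast'})")
```

Real switches typically hash over more fields (MACs, ports, both IPs), but any hash that is effectively determined by the parity of one address would give the same clean 50/50 split.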

23 Mar 2017 16:27:28
As mentioned above, we have scheduled testing of our Telehouse interlink with TalkTalk staff for this evening. This will not affect customers in any way.
23 Mar 2017 22:28:53

We have been testing the Telehouse interconnect this evening with TalkTalk engineers. This involved an ~80 minute conference call and setting up a very simple test: a server on our side plugged in to the switch connected to our 10G interconnect, running iperf3 tests against a laptop on the TalkTalk side.

The test has highlighted a problem at the TalkTalk end with the connection between two of their switches. When their laptop was plugged in to the second switch we got about 300Mbit/s, but when it was in the switch directly connected to our interconnect we got near full speed, around 900Mbit/s.

This has hopefully given them a big clue and they will now involve the switch vendor for further investigations.

23 Mar 2017 23:02:34
TalkTalk have just called us back and have asked us to retest speeds on broadband circuits. We're moving traffic over to the Telehouse interconnect and will test....
23 Mar 2017 23:07:31
Initial reports show that speeds are back to normal! Hooray! We've asked TalkTalk for more details and if this is a temporary or permanent fix.
24 Mar 2017 09:22:13

Results from last night when we changed over to test the Telehouse interlink:

This shows that, unlike the previous times, speeds did not drop when we changed over to use the Telehouse interconnect at 11PM.

We will perform hourly iperf tests over the weekend to be sure that this has been fixed.

We're still awaiting details from TalkTalk as to what the fix was and if it is a temporary or permanent fix.

24 Mar 2017 16:40:24
We are running on the Telehouse interconnect and are running hourly iperf3 tests against a number of our customers over the weekend. This will tell us if the speed issues are fixed.
27 Mar 2017 09:37:12

Speed tests against customers over the weekend do not show the peak-time slowdowns; this confirms that what TalkTalk did on Thursday night has fixed the problem. We are still awaiting the report from TalkTalk regarding this incident.

The graph above shows iperf3 speed test results taken once an hour over the weekend against nearly 30 customers. Although some are a bit spiky, we are no longer seeing the drastic reduction in speeds at peak time. The spikiness is due to the lines being used as normal by the customers, and so is expected.

28 Mar 2017 10:52:25
We're expecting the report from TalkTalk at the end of this week or early next week (w/b 2017-04-03).
10 Apr 2017 16:43:03
We've not yet had the report from TalkTalk, but we do expect it soon...
04 May 2017 09:16:33
We've had an update saying: "The trigger & root cause of this problem is still un-explained; however investigations are continuing between our IP Operation engineers and vendor".

This testing is planned for 16th May.

Resolution From TT: Planned work took place on the 16th May which appears to have been a success. IP Ops engineers swapped the FPC 5 and a 10 gig module on the ldn-vc1.thn device. They also performed a full reload of the entire virtual chassis (as planned). This appears to have resolved the slow speed issues seen by the iperf testing onsite. Prior to this IP ops were seeing consistent slow speeds with egress traffic sourced from FPC5 to any other FPC; therefore they are confident that this has now been fixed. IP Ops have moved A&A's port back to FPC 5 on LDN-vc1.thn.
Started 18 Feb 2017
Closed 27 Mar 2017 09:30:00
Cause TT

17 May 2017 12:00:00
Posted: 26 Apr 2017 11:01:07
We have noticed packet loss between 8pm and 10pm on Tuesday (25th April) evening on a small number of TalkTalk connected lines. This may be related to TalkTalk maintenance. We will review this again tomorrow.
26 Apr 2017 16:41:43
We are seeing packet loss this afternoon on some of these lines too. We are contacting TalkTalk.
26 Apr 2017 16:44:23
26 Apr 2017 17:58:06
We have moved TalkTalk traffic over to our Harbour Exchange interconnect to see if this makes a difference or not to the packet loss that we are seeing...
26 Apr 2017 20:50:41
Moving the traffic made no difference. We've had calls with TalkTalk and they have opened an incident and are investigating further.
26 Apr 2017 20:55:14
The pattern that we are seeing relates to which LAC TT are using to send traffic over to us. TT use two LACs at their end, and lines via one have loss whilst lines via the other have no loss.
26 Apr 2017 21:32:30
Another example, showing the loss this evening:

26 Apr 2017 22:26:50
TalkTalk have updated us with: "An issue has been reported that some Partners are currently experiencing quality of service issues, such as slow speed and package (SIC) loss, with their Broadband service. From initial investigations the NOC engineers have identified congestion to the core network connecting to Telehouse North as being the possible cause. This is impacting Partners across the network and not specific to one region, and the impacted volume cannot be determine at present. Preliminary investigations are underway with our NOC and Network Support engineers to determine the root cause of this network incident. At this stage we are unable to issue an ERT until the engineers have completed further diagnostics."
28 Apr 2017 09:43:43
Despite TalkTalk thinking they had fixed this, we are still seeing packet loss on these circuits between 8pm and 10pm. It's not as much packet loss as we saw on Wednesday evening, but loss nonetheless. This has been reported back to TalkTalk.
04 May 2017 15:42:04
We are now blocking the two affected TalkTalk LACs on new connections, e.g. on a PPP re-connect. This means that it will take a bit longer for a line to re-connect (depending upon the broadband router, perhaps a minute or two).

This does mean that lines will not be on the LACs which have evening packet loss. We hope not to have to keep this blocking in place for very long, as we hope TalkTalk will fix this soon.

05 May 2017 16:48:13
We've decided to stop blocking the LACs that are showing packet loss, as doing so was causing connection problems for a few customers. We have had a telephone call with TalkTalk today, and this issue is being escalated with TalkTalk.
05 May 2017 19:47:37
We've had this update from TalkTalk today:

"I have had confirmation from our NOC that following further investigations by our IP Operations team an potential issue has been identified on our LTS (processes running higher than normal). After working with our vendor it has been recommended a card switch-over should resolve this.

This has been scheduled for 16th May. We will post further details next week.

08 May 2017 09:26:21
Planned work has been scheduled for 16th May for this; details and updates of this work are on https://aastatus.net/2386
05 Jun 2017 11:49:24
The planned work took place on the 16th May which appears to have been a success.
Broadband Users Affected 20%
Started 26 Apr 2017 10:00:00
Closed 17 May 2017 12:00:00

02 Jun 2017 16:44:00
Posted: 02 Jun 2017 12:48:45
We have had several customers notify us that they're having connectivity issues in the Cambridge area, all FTTC customers so far, where TCP packets larger than 1300 bytes appear to be dropped. ICMP appears unaffected.
We are currently in the process of reporting this to BT and will post further updates as they become available.
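A fault like this (large packets silently dropped, small ones fine) can be narrowed down by probing for the largest packet size that still gets through, for example with `ping` and the don't-fragment bit set. The sketch below shows only the binary-search part; the probe is a caller-supplied stand-in, since in practice it would wrap something like `ping -M do -s <size> <host>`:

```python
def largest_passing_size(probe, lo: int = 0, hi: int = 1500) -> int:
    """Binary-search for the largest packet size that `probe` reports
    as getting through. `probe(size) -> bool` is supplied by the
    caller - e.g. a wrapper around ping with DF set."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid      # mid-sized packet passed; look higher
        else:
            hi = mid - 1  # dropped; look lower
    return lo

# Illustration with a fake path that drops anything over 1300 bytes,
# mimicking the fault described above.
broken_path = lambda size: size <= 1300
print(largest_passing_size(broken_path))  # -> 1300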
02 Jun 2017 13:08:24
Proactive have raised a fault, AF-AR-OP-3774655. This has also been discovered to affect FTTP customers, which makes sense as they use the same backbone infrastructure as the FTTC customers.
02 Jun 2017 13:35:22
Several customers are reporting that their lines are now performing as expected after restarting their PPP session, so it may be worth restarting your PPP session and letting us know what happens when you try that.
We're still awaiting an update from proactive.
Resolution We have been advised by BT that a link between Milton Keynes and Peterborough was erroring, and was taken out of service to resolve the issue earlier today.
Broadband Users Affected 0.20%
Started 02 Jun 2017 11:47:00
Closed 02 Jun 2017 16:44:00
Cause BT

17 May 2017 12:30:00
Posted: 17 May 2017 09:10:40
17 May 2017 09:30:01
Affected customers went offline at 02:12 this morning. Further information: these exchanges are offline due to 20 tubes of fibre being damaged by the install of a retail advertising board on the A118 Southern [South?] Kings Rd. Notifications from TalkTalk said service was expected to be restored by 9am, but due to the nature of the fibre break it may well take longer to fix.
17 May 2017 10:31:29
Update from TalkTalk: Virgin media have advised that work to restore service is ongoing but due to the extent of the damage this is taking longer than expected. In parallel our [TalkTalk] NOC are investigating if the Total Loss traffic can be re-routed. We will provide a further update as soon as more information is available.
17 May 2017 10:55:56
From TalkTalk: Virgin Media have advised restoration work has completed on the majority of the damaged fibre, our NOC team have also confirmed a number of exchanges are now up and back in service. The exchanges that are now back in service are Muswell Hill, Ingrebourne, Loughton and Bowes Park.
17 May 2017 11:16:28
From TalkTalk: Our NOC team have advised a number of exchanges are now in service. These are Muswell Hill, Bowes Park, Loughton, Ingrebourne, Chingford, Highams Park, Leytonstone, Stratford and Upton Park.
17 May 2017 11:30:54
That said, we are still seeing lines on the exchanges mentioned above as being offline...
17 May 2017 12:11:20
No further updates as yet.
Resolution It looks like most, if not all, of our affected lines are now back online. Update from TalkTalk: Virgin Media have advised 5 of the 8 impacted fibre tubes have been successfully spliced and their engineers are still on site restoring service to the remaining cables
Started 17 May 2017 01:54:00 by AA Staff
Closed 17 May 2017 12:30:00

04 May 2017 18:00:00
Posted: 04 May 2017 16:53:39
Some TT lines blipped at 16:34 and 16:46. It appears that the lines have recovered. We have reported this to TT.
05 May 2017 08:48:41
This was caused by an incident in the TalkTalk network. This is from TalkTalk: "...Network support have advised that the problem was caused by the card failure which has now been taken offline..."
Started 04 May 2017 16:36:27 by AAISP automated checking
Closed 04 May 2017 18:00:00
Cause TT