
18 May 16:30:59
Details
7 Feb 14:32:32

We are seeing issues with IPv6 on a few VDSL cabinets serving our customers. There is no apparent geographical commonality amongst these, as far as we can tell.

Lines pass IPv4 fine, but only pass IPv6 TCP/UDP intermittently and for brief periods, usually 4 or so packets, before breaking. Customers have tried a BT modem, an Asus modem, and our supplied ZyXEL as both a modem and a router, with no difference on any. We also lent them a FireBrick to do some traffic dumps.

Traffic captures at our end and the customer end show that the IPv6 TCP and UDP packets are leaving us but not reaching the customer. ICMP (eg pings) do work.
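To illustrate the kind of check involved, a rough sketch along these lines (not our actual diagnostic tooling; the address and ports are placeholders, and scapy plus root access are assumed) sends one IPv6 TCP, UDP and ICMPv6 probe each towards a test host and reports which of them get any reply:

# Rough sketch only (placeholder address; requires scapy and root privileges).
# Sends one IPv6 TCP SYN, one UDP packet and one ICMPv6 echo request, then reports
# which of them receive any reply - mirroring the symptom where ICMP works but
# TCP/UDP do not.
from scapy.all import IPv6, TCP, UDP, ICMPv6EchoRequest, sr1

TARGET = "2001:db8::1"  # placeholder test address on the affected line

probes = {
    "tcp":  IPv6(dst=TARGET) / TCP(dport=80, flags="S"),
    "udp":  IPv6(dst=TARGET) / UDP(dport=53),
    "icmp": IPv6(dst=TARGET) / ICMPv6EchoRequest(),
}

for name, pkt in probes.items():
    reply = sr1(pkt, timeout=2, verbose=False)
    print(name, "reply received" if reply is not None else "no reply")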

The first case was reported to us in August 2016, and it has taken a while to get to this point. Until very recently there was only a single reported case. Now that we have four cases we have a bit more information and are able to look at commonalities between them.

Of these circuits, two are serving customers via TalkTalk and two are serving customers via BT backhaul, so this isn't a "carrier network issue" as far as we can make out. The only thing we can find in common is that the cabinets are all ECI. (In fact, one of the BT connected customers has migrated to TalkTalk backhaul - still with us, using the same cabinet and phone line etc - and the IPv6 bug has moved to the new circuit via TalkTalk as the backhaul provider.)

We are working with senior TalkTalk engineers to try to perform a traffic capture at the exchange - at the point the traffic leaves TalkTalk equipment and is passed on to Openreach. This will show whether the packets are making it that far and will help pin down the point at which packets are being lost. Understandably this requires TalkTalk engineers working out of hours to perform the capture, and we're currently waiting to hear when this will happen.

Update
2 Mar 11:14:48
Packet captures on an affected circuit carried out by TalkTalk have confirmed that this issue most likely lies in the Openreach network. The circuits that we have been made aware of are being pursued with both BT and TalkTalk so that Openreach can investigate the issue further.
If you believe you may be affected please do contact support.
Update
17 Mar 09:44:00
Having had TalkTalk capture the traffic in the exchange, the next step is to capture traffic at the road-side cabinet. This is being progressed with Openreach and we hope it will happen 'soon'.
Update
29 Mar 09:52:52
We've received an update from BT advising that they have been able to replicate the missing IPv6 packets. This is believed to be a bug which they are pursuing with the vendor.

In the meantime they have also identified a fix which they are working to deploy. We're currently awaiting further details, and will update this post once they become known.
Update
18 May 16:30:59
We've been informed that the fix for this issue is currently being tested with Openreach's supplier, but should be released to them on the 25th May. Once released to Openreach, they will then perform internal testing before deploying it to their network. We haven't yet been given any estimated dates for the final deployment of this fix.
In the interim, as a workaround, all known affected circuits on TalkTalk backhaul have had a line card swap at the cabinet, which has restored IPv6 on all TT circuits known to be affected by this issue.
BT have come back to us suggesting that they too have a workaround, so we have requested that it be implemented on all known affected BT circuits to restore IPv6 to the customers known to have this issue on BT backhaul.
Broadband Users Affected 0.05%
Started 7 Feb 09:00:00 by AA Staff
Update was expected Today 18:00:00

8 May 09:26:21
Details
26 Apr 11:01:07
We have noticed packet loss between 8pm and 10pm on Tuesday (25th April) evening on a small number of TalkTalk connected lines. This may be related to TalkTalk maintenance. We will review this again tomorrow.
Update
26 Apr 16:41:43
We are seeing packet loss this afternoon on some of these lines too. We are contacting TalkTalk.
Update
26 Apr 16:44:23
Update
26 Apr 17:58:06
We have moved TalkTalk traffic over to our Harbour Exchange interconnect to see if this makes a difference or not to the packet loss that we are seeing...
Update
26 Apr 20:50:41
Moving the traffic made no difference. We've had calls with TalkTalk and they have opened an incident and are investigating further.
Update
26 Apr 20:55:14
The pattern that we are seeing relates to which LAC TT are using to send traffic over to us. TT use two LACs at their end, and lines via one have loss whilst lines via the other have no loss.
Update
26 Apr 21:32:30
Another example, showing the loss this evening:

Update
26 Apr 22:26:50
TalkTalk have updated us with: "An issue has been reported that some Partners are currently experiencing quality of service issues, such as slow speed and package (SIC) loss, with their Broadband service. From initial investigations the NOC engineers have identified congestion to the core network connecting to Telehouse North as being the possible cause. This is impacting Partners across the network and not specific to one region, and the impacted volume cannot be determine at present. Preliminary investigations are underway with our NOC and Network Support engineers to determine the root cause of this network incident. At this stage we are unable to issue an ERT until the engineers have completed further diagnostics."
Update
28 Apr 09:43:43
Despite TalkTalk thinking they had fixed this, we are still seeing packet loss on these circuits between 8pm and 10pm. It's not as much packet loss as we saw on Wednesday evening, but loss nonetheless. This has been reported back to TalkTalk.
Update
4 May 15:42:04
We are now blocking the two affected TalkTalk LACs on new connections, eg a PPP re-connect. This means that it will take a bit longer for a line to re-connect (depending upon the broadband router, perhaps a minute or two).

This does mean that lines will not be on the LACs which have evening packet loss. We hope not to have to keep this blocking in place for very long, as we hope TalkTalk will fix this soon.

Update
5 May 16:48:13
We've decided to stop blocking the LACs that are showing packet loss, as the blocking was causing connection problems for a few customers. We have had a telephone call with TalkTalk today, and this issue is being escalated with TalkTalk.
Update
5 May 19:47:37
We've had this update from TalkTalk today:

"I have had confirmation from our NOC that following further investigations by our IP Operations team an potential issue has been identified on our LTS (processes running higher than normal). After working with our vendor it has been recommended a card switch-over should resolve this.

This has been scheduled for 16th May. We will post further details next week.

Update
8 May 09:26:21
Planned work has been scheduled for 16th May for this; details and updates of this work are on https://aastatus.net/2386
Broadband Users Affected 20%
Started 26 Apr 10:00:00
Update was expected 16 May 12:00:00

3 May 13:20:48
Details
26 Apr 10:16:02
We have identified packet loss across our lines at MOSS SIDE that occurs between 8pm and 10pm. We have raised this with BT, who hope to have this resolved by May 30th. We will update you on completion of this work.
Broadband Users Affected 0.15%
Started 26 Apr 10:13:41 by BT
Update expected 30 May 12:00:00
Expected close 30 May (Estimated Resolution Time from BT)

9 Mar 20:00:00
Details
8 Mar 12:29:14

We continue to work with TalkTalk to get to the bottom of the slow throughput issue as described on https://aastatus.net/2358

We will be performing some routing changes and tests this afternoon and this evening. We are not expecting this to cause any drops for customers, but this evening there will be times when throughput for 'single thread' downloads will be slow. Sorry for the short notice; please bear with us, this is proving to be a tricky fault to track down.

Update
8 Mar 22:39:39
Sorry, due to TalkTalk needing extra time to prepare for their changes this work has been moved to Thursday 9th evening.
Started 9 Mar 20:00:00
Update was expected 9 Mar 23:00:00

17 May 12:30:00
Details
17 May 09:10:40
We have identified outages within the North London area affecting multiple exchanges. The affected exchanges are listed as: SHOEBURYNESS,THORPE BAY, CANVEY ISLAND, HADLEIGH – ESSEX, MARINE, NORTH BENFLEET, STANFORD LE HOPE, VANGE, WICKFORD, BLOOMSBURY AKA HOWLAND ST, HOLBORN, PRIMROSE HILL, KINGSLAND GREEN, TOTTENHAM, BOWES PARK, PALMERS GREEN, WALTHAM CROSS, WINCHMORE HILL, EDMONTON, NEW SOUTHGATE, EPPING, HAINAULT, ILFORD NORTH, ROMFORD, UPMINSTER, NORTH WEALD, STAMFORD HILL, DAGENHAM, ILFORD CENTRAL, GOODMAYES, STRATFORD, HIGHAMS PARK, LEYTONSTONE, WALTHAMSTOW, CHINGFORD, KENTISH TOWN and MUSWELL HILL. No root cause has been identified. We will update this status page as the updates become available from our supplier.
Update
17 May 09:30:01
Affected customers went offline at 02:12 this morning. Further information: these exchanges are offline due to 20 tubes of fibre being damaged by the installation of a retail advertising board on the A118 Southern [South?] Kings Rd. Notifications from TalkTalk indicated that service was expected to be restored by 9am, but due to the nature of the fibre break it may well take longer to fix.
Update
17 May 10:31:29
Update from TalkTalk: Virgin media have advised that work to restore service is ongoing but due to the extent of the damage this is taking longer than expected. In parallel our [TalkTalk] NOC are investigating if the Total Loss traffic can be re-routed. We will provide a further update as soon as more information is available.
Update
17 May 10:55:56
From TalkTalk: Virgin Media have advised restoration work has completed on the majority of the damaged fibre, our NOC team have also confirmed a number of exchanges are now up and back in service. The exchanges that are now back in service are Muswell Hill, Ingrebourne, Loughton and Bowes Park.
Update
17 May 11:16:28
From TalkTalk: Our NOC team have advised a number of exchanges are now in service. These are Muswell Hill, Bowes Park, Loughton, Ingrebourne, Bowes Park, Chingford, Highams Park, Leytonstone, Stratford and Upton Park.
Update
17 May 11:30:54
That said, we are still seeing lines on exchanges mentioned above as being offline....
Update
17 May 12:11:20
No further updates as yet.
Resolution It looks like most, if not all, of our affected lines are now back online. Update from TalkTalk: Virgin Media have advised 5 of the 8 impacted fibre tubes have been successfully spliced and their engineers are still on site restoring service to the remaining cables
Started 17 May 01:54:00 by AA Staff
Closed 17 May 12:30:00

4 May 18:00:00
[Broadband] - TT Blips - Closed
Details
4 May 16:53:39
Some TT lines blipped at 16:34 and 16:46. It appears that the lines have recovered. We have reported this to TT.
Update
5 May 08:48:41
This was caused by an incident in the TalkTalk network. This is from TalkTalk: "...Network support have advised that the problem was caused by the card failure which has now been taken offline..."
Started 4 May 16:36:27 by AAISP automated checking
Closed 4 May 18:00:00
Cause TT

27 Mar 09:30:00
Details
19 Feb 18:35:15
We have seen some cases with degraded performance on some TT lines, and we are investigating. Not a lot to go on yet, but be assured we are working on this and engaging the engineers within TT to address this.
Update
21 Feb 10:13:20

We have completed further tests and we are seeing congestion manifesting itself as slow throughput at peak times (evenings and weekends) on VDSL (FTTC) lines that connect to us through a certain TalkTalk LAC.

This has been reported to senior TalkTalk staff.

To explain further: VDSL circuits are routed from TalkTalk to us via two LACs. We are seeing slow throughput at peak times on one LAC and not the other.

Update
27 Feb 11:08:58
Very often with congestion it is easy to find the network port or system that is overloaded, but so far, sadly, we've not found the cause. A&A staff, customers and TalkTalk network engineers have done a lot of checks and tests on various parts of the backhaul network, but we are finding it difficult to locate the cause of the slow throughput. We are all still working on this and will update again tomorrow.
Update
27 Feb 13:31:39
We've been in discussions with other TalkTalk wholesalers who have also reported the same problem to TalkTalk. There does seem to be more of a general problem within the TalkTalk network.
Update
27 Feb 13:32:12
We have had an update from TalkTalk saying that, based on multiple reports from ISPs, they are investigating further.
Update
27 Feb 23:21:21
Further tests this evening by A&A staff show that the throughput problem does not relate to a specific LAC, but that something in TalkTalk appears to be limiting single TCP sessions to 7-9M max during peak times. Running a single iperf test results in 7-9M, but running ten at the same time can fill a 70M circuit. We've passed these findings on to TalkTalk.
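For illustration, the comparison can be reproduced with iperf3's JSON output, something along these lines (a sketch only, not our exact test harness; the hostname is a placeholder for an iperf3 server on the line under test):

# Sketch only: compare a single-stream download with a ten-stream download using
# iperf3 (assumed installed) against a placeholder server on the circuit under test.
import json
import subprocess

HOST = "iperf.example.test"  # placeholder iperf3 server on the customer's line

def download_mbps(streams):
    # -R reverses the test so the server sends to us (a download), -P sets the
    # number of parallel TCP streams, -J asks for JSON output.
    out = subprocess.run(
        ["iperf3", "-c", HOST, "-R", "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e6

print(" 1 stream :", round(download_mbps(1), 1), "Mb/s")
print("10 streams:", round(download_mbps(10), 1), "Mb/s")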
Update
28 Feb 09:29:56
As expected the same iperf throughput tests are working fine this morning. TT are shaping at peak times. We are pursuing this with senior TalkTalk staff.
Update
28 Feb 11:27:45
TalkTalk are investigating. They have stated that circuits should not be rate limited and that they are not intentionally rate limiting. They are still investigating the cause.
Update
28 Feb 13:14:52
Update from TalkTalk: Investigations are currently underway with our NOC team who are liaising with Juniper to determine the root cause of this incident.
Update
1 Mar 16:38:54
TalkTalk are able to reproduce the throughput problem and investigations are still ongoing.
Update
2 Mar 16:51:12
Some customers did see better throughput on Wednesday evening, but not everyone. We've done some further testing with TalkTalk today and they continue to work on this.
Update
2 Mar 22:42:27
We've been in touch with the TalkTalk Network team this evening and have been performing further tests (see https://aastatus.net/2363 ). Investigations are still ongoing, but the work this evening has given a slight clue.
Update
3 Mar 14:24:48
During tests yesterday evening we saw slow throughput when using the Telehouse interconnect and fast (normal) throughput over the Harbour Exchange interconnect. Therefore, this morning, we disabled our Telehouse North interconnect. We will carry on running tests over the weekend and we welcome customers to do the same. We are expecting throughput to be fast for everyone. We will then liaise with TalkTalk engineers regarding this on Monday.
Update
6 Mar 15:39:33

Tests over the weekend suggest that speeds are good when we only use our Harbour Exchange interconnect.

TalkTalk are moving the interconnect we have at Telehouse to a different port at their side so as to rule out a possible hardware fault.

Update
6 Mar 16:38:28
TalkTalk have moved our THN port and we will be re-testing this evening. This may cause some TalkTalk customers to experience slow (single thread) downloads this evening. See: https://aastatus.net/2364 for the planned work notice.
Update
6 Mar 21:39:55
The testing has been completed, and sadly we still see slow speeds when using the THN interconnect. We are now back to using the Harbour Exchange interconnect where we are seeing fast speeds as usual.
Update
8 Mar 12:30:25
Further testing is happening on Thursday evening: https://aastatus.net/2366 This is to try to help narrow down where the problem is occurring.
Update
9 Mar 23:23:13
We've been testing this evening, this time with some more customers, so thank you to those who have been assisting. (We'd welcome more customers to be involved - you just need to run an iperf server on IPv4 or IPv6 and let one of our IPs through your firewall - contact Andrew if you're interested.) We'll be passing the results on to TalkTalk, and the investigation continues.
Update
10 Mar 15:13:43
Last night we saw some lines slow and some lines fast, so having extra lines to test against should help in figuring out why this is the case. Quite a few customers have set up iperf servers for us and we are now testing 20+ lines. (Still happy to add more.) Speed tests are being run three times an hour and we'll collate the results after the weekend and report the findings back to TalkTalk.
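The collation itself is straightforward; a sketch along these lines (made-up file name and column layout, purely illustrative) would summarise the recorded results per line and per hour of day so any peak-time drop stands out:

# Sketch only: collate iperf results from a CSV with made-up columns
# (timestamp, line, mbps) into a median speed per line per hour of day,
# so peak-time slowdowns stand out against the rest of the day.
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

samples = defaultdict(list)  # (line, hour) -> list of Mb/s readings

with open("iperf_results.csv", newline="") as f:
    for timestamp, line, mbps in csv.reader(f):
        hour = datetime.fromisoformat(timestamp).hour
        samples[(line, hour)].append(float(mbps))

for (line, hour), speeds in sorted(samples.items()):
    print(f"{line} {hour:02d}:00 median {median(speeds):.1f} Mb/s over {len(speeds)} tests")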
Update
11 Mar 20:10:21
Update
13 Mar 15:22:43

We now have samples of lines which are affected by the slow throughput and those that are not.

Since 9pm Sunday we are using the Harbour Exchange interconnect in to TalkTalk and so all customers should be seeing fast speeds.

This is still being investigated by us and TalkTalk staff. We may do some more testing in the evenings this week and we are continuing to run iperf tests against the customers who have contacted us.
Update
14 Mar 15:59:18

TalkTalk are doing some work this evening and will be reporting back to us tomorrow. We are also going to be carrying out some tests ourselves this evening too.

Our tests will require us to move traffic over to the Telehouse interconnect, which may mean some customers will see slow (single thread) download speeds at times. This will be between 9pm and 11pm

Update
14 Mar 16:45:49
This is from the weekend:

Update
17 Mar 10:42:28
We've stopped the iperf testing for the time being. We will start it back up again once we or TalkTalk have made changes that require testing to see if things are better or not, but at the moment there is no need for the testing as all customers should be seeing fast speeds due to the Telehouse interconnect not being in use. Customers who would like quota top-ups, please do email in.
Update
17 Mar 18:10:41
To help with the investigations, we're also asking for customers with BT connected FTTC/VDSL lines to run iperf so we can test against them too - details on https://support.aa.net.uk/TTiperf Thank you!
Update
20 Mar 12:54:02
Thanks to those who have set up iperf for us to test against. We ran some tests over the weekend whilst swapping back to the Telehouse interconnect, and tested BT and TT circuits for comparison. Results are that around half the TT lines slowed down but the BT circuits were unaffected.

TalkTalk are arranging some further tests to be done with us which will happen Monday or Tuesday evening this week.

Update
22 Mar 09:37:30
We have scheduled testing of our Telehouse interlink with TalkTalk staff for this Thursday evening. This will not affect customers in any way.
Update
22 Mar 09:44:09
In addition to the interconnect testing on Thursday mentioned above, TalkTalk have also asked us to retest DSL circuits to see if they are still slow. We will perform these tests tonight, Wednesday evening.

TT have confirmed that they have made a configuration change on the switch at their end in Telehouse - this is the reason for the speed testing this evening.

Update
22 Mar 12:06:50
We'll be running iperf3 tests against our TT and BT volunteers this evening, every 15 minutes from 4pm through to midnight.
Update
22 Mar 17:40:20
We'll be changing over to the Telehouse interconnect between 8pm and 9pm this evening for testing.
Update
23 Mar 10:36:06

Here are the results from last night:

And BT Circuits:

Some of the results are rather up and down, but these lines are in use by customers so we would expect some fluctuation; it is clear, though, that a number of lines are unaffected and a number are affected.

Here's the interesting part. Since this problem started we have rolled out some extra logging on to our LNSs; this has taken some time as we only update one a day. However, we are now logging the IP address used at our side of L2TP tunnels from TalkTalk. We have eight live LNSs and each one has 16 IP addresses that are used. With this logging we've identified that circuits connecting over tunnels on 'odd' IPs are fast, whilst those on tunnels on 'even' IPs are slow. This points to a LAG issue within TalkTalk, which is what we have suspected from the start, and this data should hopefully help TalkTalk with their investigations.
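To illustrate the grouping (the addresses and figures below are made-up sample data, not real results), splitting the per-circuit test results by whether the tunnel IP on our side ends in an odd or an even octet looks something like this:

# Sketch only: group per-circuit peak-time throughput by the parity of the last
# octet of the L2TP tunnel IP on our side. The addresses (documentation range)
# and speeds are made-up sample data, purely to show the idea.
from statistics import median

results = [
    ("198.51.100.1", 68.2), ("198.51.100.2", 8.4),
    ("198.51.100.3", 71.0), ("198.51.100.4", 7.9),
]

groups = {"odd": [], "even": []}
for ip, mbps in results:
    parity = "odd" if int(ip.rsplit(".", 1)[-1]) % 2 else "even"
    groups[parity].append(mbps)

for parity, speeds in groups.items():
    print(f"{parity}-numbered tunnel IPs: median {median(speeds):.1f} Mb/s")

A LAG hashes traffic across its member links, so a clean odd/even split like this is consistent with traffic on one member link being fine while the other is congested or faulty.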

Update
23 Mar 16:27:28
As mentioned above, we have scheduled testing of our Telehouse interlink with TalkTalk staff for this evening. This will not affect customers in any way.
Update
23 Mar 22:28:53

We have been testing the Telehouse interconnect this evening with TalkTalk engineers. This involved a roughly 80 minute conference call and setting up a very simple test: a server on our side plugged in to the switch which is connected to our 10G interconnect, running iperf3 tests against a laptop on the TalkTalk side.

The test has highlighted a problem at the TalkTalk end with the connection between two of their switches. When their laptop was plugged in to the second switch we got about 300Mb/s, but when it was in the switch directly connected to our interconnect we got near full speed of around 900Mb/s.

This has hopefully given them a big clue and they will now involve the switch vendor for further investigations.

Update
23 Mar 23:02:34
TalkTalk have just called us back and have asked us to retest speeds on broadband circuits. We're moving traffic over to the Telehouse interconnect and will test....
Update
23 Mar 23:07:31
Initial reports show that speeds are back to normal! Hooray! We've asked TalkTalk for more details and if this is a temporary or permanent fix.
Update
24 Mar 09:22:13

Results from last night when we changed over to test the Telehouse interlink:

This shows that unlike the previous times, when we changed over to use the Telehouse interconnect at 11PM speeds did not drop.

We will perform hourly iperf tests over the weekend to be sure that this has been fixed.

We're still awaiting details from TalkTalk as to what the fix was and if it is a temporary or permanent fix.

Update
24 Mar 16:40:24
We are running on the Telehouse interconnect and are running hourly iperf3 tests against a number of our customers over the weekend. This will tell us if the speed issues are fixed.
Update
27 Mar 09:37:12

Speed tests against customers over the weekend do not show the peak-time slowdowns; this confirms that what TalkTalk did on Thursday night has fixed the problem. We are still awaiting the report from TalkTalk regarding this incident.

The graph above shows iperf3 speed test results taken once an hour over the weekend against nearly 30 customers. Although some are a bit spiky, we are no longer seeing the drastic reduction in speeds at peak time. The spikiness is due to the lines being used as normal by the customers and so is expected.

Update
28 Mar 10:52:25
We're expecting the report from TalkTalk at the end of this week or early next week (w/b 2017-04-03).
Update
10 Apr 16:43:03
We've not yet had the report from TalkTalk, but we do expect it soon...
Update
4 May 09:16:33
We've had an update saying: "The trigger & root cause of this problem is still un-explained; however investigations are continuing between our IP Operation engineers and vendor".

This testing is planned for 16th May.

Resolution This has been fixed, we're awaiting the full report from TalkTalk.
Started 18 Feb
Closed 27 Mar 09:30:00
Cause TT

17 May 09:10:43
Details
22 Apr 11:12:14

We are making some major upgrades to the way we do notifications to customers for orders, faults, etc.

One of the first things to change will be the up/down notifications we send. These can be tweets, texts or emails. Part of the work will be changing the options you have for controlling these.

Over the coming weeks we expect to add new settings, and remove old settings over time. We are aiming to try and mirror most of the existing functionality, in particular the sending of tweets or texts for unexpected line drops for individual lines.

You will also start to see options to load a PGP public key for emailed notifications.

Over coming months we expect to move more and more of our systems over to the new process, including order tracking and engineer appointments, etc.

The end result should be more flexible, and consistent, notifications.

Update
23 Apr 17:54:32

We are mostly ready to switch over to the new system - probably happening on Monday. The wording of the messages will change slightly with the new system. There are already more options on the control pages, and once we have switched over these will replace the old settings.

Once done, we will start using the new system for more and more aspects of the control pages, ordering, and faults, over the coming weeks.

Update
24 Apr 14:23:06
We have switched the line up/down messages to the new KCI system. Please do let us know if you have any concerns or comments.
Update
24 Apr 16:17:34
Customers can now paste a PGP public key into the control pages login page.
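For those curious how a pasted key gets used, the encryption step is conceptually along these lines (an illustrative sketch using the python-gnupg library, not our actual notification code; the key block and message are placeholders):

# Illustrative sketch only (not our notification code): encrypting a notification
# to a customer's uploaded PGP public key using the python-gnupg library.
# The key block and message below are placeholders.
import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/example-keyring")  # throwaway keyring for the example

customer_key = """-----BEGIN PGP PUBLIC KEY BLOCK-----
...placeholder key pasted into the control pages...
-----END PGP PUBLIC KEY BLOCK-----"""

imported = gpg.import_keys(customer_key)
message = "Line 01234 567890: PPP down 20:35, back up 20:38"
encrypted = gpg.encrypt(message, imported.fingerprints, always_trust=True)
print(str(encrypted))  # ASCII-armoured text ready to send by email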
Update
25 Apr 16:02:39
After a few comments we have made sure the postcode and login are shown in the line up/down messages. The new system seems to be working well, and we are ready to start extending it to more messages soon.
Started 23 Apr
Closed 17 May 09:10:43
Expected close 1 Jun

22 Apr 11:13:59
Details
2 Feb 21:19:15
http://www.euronews.com/2017/01/27/adrian-kennard-challenging-surveillance

21 Apr 13:07:49
Details
21 Apr 09:59:57
At 9:30 we saw a number of TalkTalk connected circuits drop and reconnect. We're investigating, but it seems like a TalkTalk backhaul issue.

Update
21 Apr 11:24:54
Most lines logged back in very quickly. A few are still down. It seems TalkTalk had some sort of incident at the Telehouse Harbour Exchange datacentre; we're expecting further information shortly.
Update
21 Apr 12:54:00
TalkTalk have some sort of outage in their Harbour Exchange datacentre. As we have an interconnect in both Telehouse and Harbour Exchange the traffic has moved over to Telehouse. There are a very small number of customers (<10) who are still offline, they may just need a reboot of their router or modem, but we are contacting them individually.
Update
21 Apr 13:08:47
TalkTalk say: Root cause analysis conducted by the NOC and our Network Support team has identified that this incident was caused by a crashed routing card for the LTS. The card was reloaded by our Network Support team and full service has now been restored.
Resolution Any customer still offline, please reboot your router/modem and then contact Support if still off.
Started 21 Apr 09:30:50
Closed 21 Apr 13:07:49

14 Mar 21:10:00
Details
14 Mar 21:05:28
Looks like we just had some sort of blip affecting broadband customers. We're investigating.
Resolution This was an LNS crash, and so affected customers on the "i" LNS. The cause is being investigated, but preliminary investigations show that it's probably a problem that is fixed in a software version scheduled to be loaded on to this LNS in a couple of days' time, as part of the rolling software update that we're performing at the moment.
Broadband Users Affected 12%
Started 14 Mar 21:00:57
Closed 14 Mar 21:10:00

6 Mar 21:37:45
Details
6 Mar 16:41:32
As part of the slow throughput problem described in https://aastatus.net/2358 we will be performing further tests this evening. This will involve moving TalkTalk traffic to the interconnect which we believe is slow. Customers may see poor speeds this evening during the times that we carry out tests. The tests are expected to last less than 30 minutes between 8 and 10 pm.
Resolution This work has been completed.
Started 6 Mar 20:00:00
Closed 6 Mar 21:37:45

2 Mar 22:10:44
Details
2 Mar 21:48:39
Relating to https://aastatus.net/2358 we are currently in an emergency at-risk period as we perform some tests alongside TalkTalk staff. We don't expect any problems, but this work involves re-routing TalkTalk traffic within our network. This work is happening now. Sorry for the lack of notice.
Update
2 Mar 21:53:05
We have successfully and cleanly moved all TalkTalk traffic off our THN interconnect and on to our HEX Interconnect. (Usually we use both all the time, but for this testing we are forcing traffic through the HEX side)
Update
2 Mar 21:55:52
We're bringing back routing across both links now...
Update
2 Mar 22:03:40
We are now moving traffic to our THN interconnect.
Resolution We're now back to using both the TalkTalk links. Tests completed.
Started 2 Mar 21:46:17
Closed 2 Mar 22:10:44

16 Feb 15:00:00
Details
16 Feb 16:00:49
We have spotted some odd latency that was affecting two of our LNSs (A and B gormless). This was also visible, as you would expect, on the graphs shown for people's lines.
Resolution We believe we have addressed the issue now, sorry for any inconvenience.
Started 15 Feb 02:00:00
Closed 16 Feb 15:00:00
Previously expected 16 Feb 15:00:00

13 Feb 10:02:12
[Broadband] - LNS blip - Closed
Details
13 Feb 10:00:36
We just had an LNS blip - this would have caused some customers to drop PPP and reconnect.
Resolution There have been a few LNS blips recently. However, we do know the cause and have a software update to roll out which will fix the problem.
Started 13 Feb 09:56:00
Closed 13 Feb 10:02:12

4 Feb 09:32:03
[Broadband] - LNS blip - Closed
Details
4 Feb 09:14:11
We had an LNS reset and lines will have re-connected for some customers. We're investigating the cause.
Resolution We have found the cause, and expect a permanent fix to be deployed on next round of LNS upgrades.
Broadband Users Affected 12%
Started 4 Feb 09:12:00
Closed 4 Feb 09:32:03

31 Jan 16:29:00
Details
31 Jan 16:24:03
Customers on one of our LNSs just lost their connection and would have logged back in again shortly after. We're investigating the cause.
Update
31 Jan 16:41:32
Customers are back online. The CQM graphs for the day would have been lost for these lines. We do apologise for the inconvenience this caused.
Broadband Users Affected 12%
Started 31 Jan 16:16:00
Closed 31 Jan 16:29:00

24 Jan 18:15:00
Details
24 Jan 16:11:45
Some TalkTalk connected customers have high packetloss on their lines from around 3pm today. These lines are in the Chippenham/Bristol area. If affected you'll be experiencing slow speeds.
Update
24 Jan 16:19:23

Affected lines are looking like this. This shows the fault started just after 9am, but from 3pm there is severe packet loss.

Update
24 Jan 18:32:37
TalkTalk say "NOC & Network engineering are currently investigating congestion and packet loss across the core network." More details to follow.
Update
24 Jan 18:45:58
Problem looks fixed as of 18:15
Update
25 Jan 08:48:01
(This also affected some other circuits in other parts of the country.)
Resolution From TalkTalk: Root cause has not currently been identified. The (TalkTalk) NOC engaged Network Support, who investigated and added a new link in order to alleviate congestion. The B2B Enterprise team are currently retesting with the affected customers and initial feedback indicates that this has resolved the issue.
Broadband Users Affected 1%
Started 24 Jan 15:00:00
Closed 24 Jan 18:15:00

23 Jan 21:50:24
Details
23 Jan 21:17:18
Since 20:23 we're seeing ~20% packet loss on TalkTalk connected VDSL circuits, these customers will be experiencing very slow speeds. These are in the SALTERTON/DORCHESTER/WESTBOURNE/CRADDOCK area. We have contacted TalkTalk regarding this.
Update
23 Jan 21:50:48
This looks to have been fixed.
Resolution This was due to a card failure at Yeovil
Started 23 Jan 20:23:00
Closed 23 Jan 21:50:24
Cause TT

24 Jan
Details
23 Jan 08:21:07
Sorry to say that the new LNSs (H and I) were not archiving graphs and so the CQM graphs for customers on these LNSs have not been recorded.
Resolution Fixed
Started 16 Jan
Closed 24 Jan
Previously expected 24 Jan

18 Jan 20:30:00
Details
18 Jan 20:36:56
We're looking in to why some broadband lines and mobile SIMs dropped and reconnected at around 20:30 this evening....
Resolution Lines are back online, most reconnected within a few minutes. This blip affected about 1/8th of our customers, and was caused by one of our LNS restarting unexpectedly. We do apologise for the inconvenience this caused. We'll be investigating the cause of this.
Started 18 Jan 20:35:58
Closed 18 Jan 20:30:00
Cause LNS restart/crash

17 Jan 09:48:47
Details
17 Jan 08:35:28
Once again we are seeing an issue where TT lines are failing to connect. This is not impacting lines that are currently connected unless they drop and reconnect for some reason. It looks like only half of TT's LACs are impacted, so lines are eventually reconnecting after several tries. It has been reported to TalkTalk and we will update this post as soon as we get an update.
Update
17 Jan 09:50:18
All affected lines appear to have reconnected.
Resolution We are still investigating the root cause
Broadband Users Affected 1%
Started 17 Jan 01:00:00
Closed 17 Jan 09:48:47
Previously expected 17 Jan 12:31:59

17 May 09:02:44
Details
8 May 09:24:13

This is related to TalkTalk packet loss in the evenings, https://aastatus.net/2382

From TalkTalk: "Following further investigations by our IP Operations team an potential issue has been identified on [one of] our LTS (processes running higher than normal). After working with our vendor it has been recommended a card switch-over should resolve this."

We have not been given the exact times yet, but we expect it to be in the early hours of 16th May. We will work with TalkTalk to minimise the effect this has on our customers, but it may mean some connections drop and reconnect during this period of work.

We will update this post when we receive more details.

Update
12 May 16:49:56

We are now expecting this work to happen on the evening of the 16th and the early hours of the 17th May.

We will move traffic away from Telehouse, where TalkTalk are doing their work, so as to minimise the number of customers who will be affected.

We are expecting this work to cause some disruption, in that circuits will be disconnected and will reconnect a couple of times during this work.

Update
17 May 03:17:57
The planned work by TalkTalk tonight caused a few short PPP disconnects for some customers, eg at 00:05, 01:02 and 01:30, and some were offline for around 30 mins between 01:30 and 02:00. A further drop happened at 03:10, with lines reconnecting from 03:16 at the time of writing this post.
Resolution This work has been completed and is now closed. We do apologise to those customers who were affected by this.
Closed 17 May 09:02:44
Previously expected 16 May 00:05:00