
3 Nov 20:55:02
Details
Posted: 3 Nov 19:56:18
We had an unexpected reset of the 'B.Gormless' LNS at 19:45. This caused customers on this LNS to be disconnected. Most have reconnected now though.
Resolution This LNS has reset a few times in the past few months. We'll look at replacing the hardware.
Started 3 Nov 19:45:00
Closed 3 Nov 20:55:02

10 Oct 20:03:21
Details
Posted: 26 Apr 10:16:02
We have identified packet loss across our lines at MOSS SIDE that occurs between 8pm and 10pm. We have raised this with BT, who say they hope to have it resolved by May 30th. We will update you on completion of this work.
Broadband Users Affected 0.15%
Started 26 Apr 10:13:41 by BT
Closed 10 Oct 20:03:21
Previously expected 30 May (Last Estimated Resolution Time from BT)

10 Oct 19:58:12
Details
Posted: 10 Oct 20:00:14
The "B" LNS restarted for some reason; we are looking into why. This will have meant some customers lost PPP connectivity. Customers reconnected as quickly as their routers retried - either to a backup LNS or to the "B" LNS itself, which rebooted within a few seconds. Sorry for any inconvenience.
Started 10 Oct 19:52:00
Closed 10 Oct 19:58:12

27 Sep 13:15:00
Details
Posted: 27 Sep 13:06:43
At 13:01 we saw a number of broadband lines drop and reconnect. We are investigating the cause.
Update
27 Sep 13:18:20
This affected circuits on our 'H' and 'I' LNSs, customers on LNSs 'A' through to 'G' were unaffected.
Started 27 Sep 13:01:00
Closed 27 Sep 13:15:00

17 Sep 10:10:00
Details
Posted: 17 Sep 09:42:04
Latest from TalkTalk: BT advise their engineer is due on site at 08:40 to investigate, and they are still attempting to source a Fibre Precision Test Officer. Our field engineer has been called out and is en route to site (ETA 08:30).
Update
17 Sep 09:43:18
TalkTalk say affected area codes are: 01481, 01223, 01553, 01480, 01787, 01353 and maybe others. (Impacted exchanges are Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Derdingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton, Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourne, Mattishall, Norwich North, Rorston, Science Park, Swaffham, Steeple Mordon, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton.)
Update
17 Sep 09:43:37
TalkTalk say: Our field engineer and BT field engineer have arrived at site with investigations to the root cause now underway. At this stage Incident Management is unable to issue an ERT until the engineers have completed their diagnostics.
Update
17 Sep 09:55:09
Some lines logged back in at around 09:48.
Update
17 Sep 10:10:17
Most are back online now.
Resolution From TalkTalk: Our NOC advised that alarms cleared at 09:45 and service has been restored. Our Network Support team has raised a case with Axians (vendor) as there appeared to be an issue between the interface cards in the NGE router and the backplane (which facilitates data flow from the interface cards through the NGE). This incident is resolved and will now be closed; any further root cause analysis will follow the Problem Management process.
Started 17 Sep 06:20:00
Closed 17 Sep 10:10:00

8 Sep 01:00:00
Details
Posted: 7 Sep 23:04:00
Packet loss has been noted to some destinations, routed via LONAP. Our engineers are currently investigating and attempting to work around the loss being observed.
Update
7 Sep 23:23:52
We have disabled all of our LONAP ports for the moment - this reduces our capacity somewhat, but at this time of day the impact to customers is low. We've seen unconfirmed reports that there is some sort of problem with the LONAP peering network; we are still investigating ourselves. (LONAP is a peering exchange in London which connects up lots of ISPs and large internet companies; it's one of the main ways we connect to the rest of the Internet.)
Update
7 Sep 23:24:21
LONAP engineers are looking into this.
Update
7 Sep 23:31:09
We are no longer seeing packet loss on the LONAP network - we'll enable our sessions after getting an 'all-clear' from LONAP staff.
Update
7 Sep 23:39:44
Packet loss on the LONAP network has returned. We still have our sessions down and are waiting for the all-clear from LONAP before we enable them again. Customers are, on the whole, unaffected by this. There are reports of high latency spikes to certain places, which may or may not be related to what is happening with LONAP at the moment.
Update
8 Sep 06:57:44
We have re-enabled our LONAP sessions.
Resolution The LONAP peering exchange confirm that they had a network problem which was resolved at around 1am. It's unconfirmed, but the problem looks to have been related to some sort of network loop.
Broadband Users Affected 100%
Started 7 Sep 22:42:00 by AA Staff
Closed 8 Sep 01:00:00
Previously expected 8 Sep 03:01:52 (Last Estimated Resolution Time from AAISP)

5 Sep 14:30:00
Details
Posted: 5 Sep 12:01:04
We are seeing very high latency - over 1,000ms on many lines in the East of England. Typically around the Cambridgeshire/Suffolk area. This is affecting BT circuits, TalkTalk circuits are OK. We are investigating further and contacting BT. We suspect this is a failed link within the BT network in the Cambridge area. More details to follow shortly.
Update
5 Sep 12:28:00

Example line graph.
Update
5 Sep 12:35:38
We're currently awaiting a response from BT regarding this.
Update
5 Sep 12:37:16
BT are now actively investigating the fault.
Update
5 Sep 14:04:16
As expected, this is affecting other ISPs who use BT backhaul.
Update
5 Sep 14:23:02
Latest update from BT:- "The Transmission group are investigating further they are carrying out tests on network nodes, As soon as they have identified an issue we will advise you further. We apologies for any inconvenience caused while testing is carried out."
Update
5 Sep 14:34:35
Latency is now back to normal. We will post again when we hear back from BT.
Resolution BT have confirmed that a card in one of their routers was replaced yesterday to resolve this.
Started 5 Sep 11:00:00
Closed 5 Sep 14:30:00

29 Aug 13:00:00
Details
Posted: 17 Jun 15:24:16
We've seen very slight packet loss on a number of TalkTalk connected lines this week in the evenings. This looks to be congestion; it may show up on our CQM graphs as a few pixels of red at the top of the graph between 7pm and midnight. We have an incident open with TalkTalk. We moved traffic to our Telehouse interconnect on Friday afternoon, and Friday evening looked to be better. This may mean that the congestion is related to TalkTalk in Harbour Exchange, but it's a little too early to tell at the moment. We are monitoring this and will update again after the weekend.
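Loss this low is easiest to spot in aggregate rather than on a single line. As a rough illustration (our sketch, not AAISP's actual CQM tooling), per-interval loss counters can be bucketed by hour of day to make an evening-only pattern stand out:

```python
from collections import defaultdict

def hourly_loss(samples):
    """Aggregate (hour, packets_sent, packets_lost) samples into a
    loss percentage per hour of the day."""
    sent = defaultdict(int)
    lost = defaultdict(int)
    for hour, s, l in samples:
        sent[hour] += s
        lost[hour] += l
    return {h: 100.0 * lost[h] / sent[h] for h in sent if sent[h]}

def congested_hours(samples, threshold=0.1):
    """Hours whose aggregate loss exceeds the threshold (in percent)."""
    return sorted(h for h, pct in hourly_loss(samples).items() if pct > threshold)
```

With data like this, even a fraction of a percent of loss concentrated between 7pm and midnight shows up clearly - the same signal as the "few pixels of red" on a CQM graph.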
Update
19 Jun 16:49:34

TalkTalk did some work on the Telehouse side of our interconnect on Friday as follows:

"The device AA connect into is a chassis with multiple cards and interfaces creating a virtual switch. The physical interface AA plugged into was changed to another physical interface. We suspect this interface to be faulty as when swapped to another it looks to have resolved the packet loss."

We will be testing both of our interconnects individually over the next couple of days.

Update
20 Jun 10:29:05
TalkTalk are doing some work on our Harbour Exchange side today. Much like the work they did on the Telehouse side, they are moving our port. This will not affect customers though.
Update
28 Jun 20:46:34

Sadly, we are still seeing very low levels of packetloss on some TalkTalk connected circuits in the evenings. We have raised this with TalkTalk today, they have investigated this afternoon and say: "Our Network team have been running packet captures at Telehouse North and replicated the packet loss. We have raised this into our vendor as a priority and are due an update tomorrow."

We'll keep this post updated.

Update
29 Jun 22:12:17

Update from TalkTalk regarding their investigations today:- Our engineering team have been working through this all day with the Vendor. I have nothing substantial for you just yet, I have been told I will receive a summary of today's events this evening but I expect the update to be largely "still under investigation". Either way I will review and fire an update over as soon as I receive it. Our Vendor are committing to a more meaningful update by midday tomorrow as they continue to work this overnight.

Update
1 Jul 09:39:48
Update from TT: Continued investigation with Juniper, additional PFE checks performed. Currently seeing the drops on both VC stacks at THN and Hex. JTAC have requested additional time to investigate the issue. They suspect they have an idea what the problem is, however they need to go through the data captures from today to confirm that it is a complete match. Actions Juniper - Review logs captured today, check with engineering. Some research time required, Juniper hope to have an update by CoB Monday. Discussions with engineering will be taking place during this time.
Update
2 Jul 21:19:57

Here is an example - the loss is quite small on individual lines, but as we are seeing this sort of loss on many circuits at the same time (evenings), it makes the problem more severe. It's only due to our constant monitoring that this gets picked up.

Update
3 Jul 21:47:31
Today's update from TalkTalk: "JTAC [TT's vendor's support] have isolated the issue to one FPC [Flexible PIC Concentrator] and now need Juniper Engineering to investigate further... unfortunately Engineering are US-based and have a public holiday which will potentially delay progress... Actions: Juniper - Review information by [TalkTalk] engineering – Review PRs - if this is a match to a known issue or it's new. Some research time required, Juniper hope to have an update by Thursday"
Update
7 Jul 08:41:26
Update from TalkTalk yesterday evening: "Investigations have identified a limitation when running a mix mode VC (EX4200’s and EX4550's), the VC cable runs at 16gbps rather than 32gbps (16gbps each way). This is why we are seeing slower than expected speeds between VC’s. Our engineering team are working with the vendor exploring a number of solutions."
Update
17 Jul 14:29:29

Saturday 15th and Sunday 16th evenings were a fair bit worse than previous evenings. On Saturday and Sunday evening we saw higher levels of packet loss (between 1% and 3% on many lines) and we also saw slow single TCP thread speeds, much like we saw in April. We did contact TalkTalk over the weekend; this has been blamed on a faulty card that TalkTalk replaced on Thursday, which has caused a traffic imbalance on this part of the network.

We expect things to improve but we will be closely monitoring this on Monday evening (17th) and will report back on Tuesday.

Update
22 Jul 20:23:24
TalkTalk are planning network hardware changes relating to this in the early hours of 1st August. Details here: https://aastatus.net/2414
Update
1 Aug 10:42:58
TalkTalk called us shortly after 9am to confirm that they had completed the work in Telehouse successfully. We will move traffic over to Telehouse later today and will be reporting back the outcome on this status post over the following days.
Update
3 Aug 11:23:55
TalkTalk confirmed that they have completed the work in Harbour Exchange successfully. Time will tell if these sets of major work have helped with the problems we've been seeing on the TalkTalk network; we will be reporting back the outcome on this status post early next week.
Update
10 Aug 16:39:30
The packetloss issue has been looking better since TalkTalk completed their work. We want to monitor this for another week or so before closing this incident.
Update
29 Aug 13:56:53
The service has been working well over the past few weeks. We'll close this incident now.
Started 14 Jun 15:00:00
Closed 29 Aug 13:00:00

14 Aug 09:14:59
Details
Posted: 11 Aug 18:44:38
We need to restart the 'e.gormless' LNS - this will cause PPP to drop for customers. Update to follow.
Update
11 Aug 18:46:19
Customers on this LNS should be logging back in (if not already).
Update
11 Aug 19:00:27
There are still some lines left to log back in, but most are back now.
Update
11 Aug 19:10:47
Most customers are back now.
Update
13 Aug 12:12:47
This happened again on Sunday morning, and again a restart was needed. The underlying problem is being investigated.
Resolution We have now identified the cause of the issue that impacted both "careless" and "e.gormless". There is a temporary fix in place now, which we expect to hold, and the permanent fix will be deployed on the next rolling update of LNSs.
Started 11 Aug 18:30:00
Closed 14 Aug 09:14:59

13 Jul 18:00:00
[Broadband] TT blip - Closed
Details
Posted: 13 Jul 11:21:37
We are investigating an issue with some TalkTalk lines that disconnected at 10:51 this morning. Most have come back, but there are about 20 that are still offline. We are chasing TalkTalk Business.
Update
13 Jul 11:23:50
Latest update from TT: We have just had further reports that other resellers are also experiencing a mass of circuit drops at a similar time. This is currently being investigated by our NOC team, and updates will follow after investigation.
Started 13 Jul 10:51:49 by AAISP Pro Active Monitoring Systems
Closed 13 Jul 18:00:00
Previously expected 13 Jul 15:19:49

19 Jul
Details
Posted: 7 Feb 14:32:32

We are seeing issues with IPv6 on a few VDSL cabinets serving our customers. There is no apparent geographical commonality amongst these, as far as we can tell.

Lines pass IPv4 fine, but pass IPv6 TCP/UDP only intermittently and for brief periods - usually 4 or so packets - before breaking. Customers have tried a BT modem, an Asus modem, and our supplied ZyXEL as a modem and router; no difference with any. We also lent them a FireBrick to do some traffic dumps.

Traffic captures at our end and the customer end show that the IPv6 TCP and UDP packets are leaving us but not reaching the customer. ICMP (eg pings) do work.
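The symptom can be scripted from the customer end: an ICMPv6 echo succeeds while a TCP handshake over IPv6 does not. As a minimal sketch (a hypothetical helper, not AAISP's diagnostic tooling), a TCP-over-IPv6 reachability check looks like this; comparing its result against a plain `ping6` to the same host reproduces the mismatch described above:

```python
import socket

def tcp6_reachable(host, port, timeout=3.0):
    """Return True if a TCP handshake completes over IPv6.
    AF_INET6 is forced so a working IPv4 path can't mask the problem."""
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
        return True
    except OSError:
        return False
```

On an affected line one would expect the ping to succeed while `tcp6_reachable(host, 80)` times out or fails.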

The first case was reported to us in August 2016, and it has taken a while to get to this point. Until very recently there was only a single reported case. Now that we have four cases we have a bit more information and are able to look at commonalities between them.

Of these circuits, two are serving customers via TalkTalk and two are serving customers via BT backhaul. So this isn't a "carrier network issue", as far as we can make out. The only thing that we can find that is common is that the cabinets are all ECI. (Actually - one of the BT connected customers has migrated to TalkTalk backhaul (still with us, using the same cabinet and phone line etc) and the IPv6 bug has also moved to the new circuit via TalkTalk as the backhaul provider)

We are working with senior TalkTalk engineers to try to perform a traffic capture at the exchange - at the point the traffic leaves TalkTalk equipment and is passed on to Openreach - this will show if the packets are making it that far and will help in pinning down the point at which packets are being lost. Understandably this requires TalkTalk engineers working out of hours to perform this traffic capture and we're currently waiting for when this will happen.

Update
2 Mar 11:14:48
Packet captures on an affected circuit carried out by TalkTalk have confirmed that this issue most likely lies in the Openreach network. Circuits that we have been made aware of are being pursued with both BT and TalkTalk for Openreach to make further investigations into the issue.
If you believe you may be affected please do contact support.
Update
17 Mar 09:44:00
Having had TalkTalk capture the traffic in the exchange, the next step is to capture traffic at the road-side cabinet. This is being progressed with Openreach and we hope it will happen 'soon'.
Update
29 Mar 09:52:52
We've received an update from BT advising that they have been able to replicate the missing IPv6 packets; this is believed to be a bug, which they are pursuing with the vendor.

In the meantime they have also identified a fix which they are working to deploy. We're currently awaiting further details regarding this, and will update this post once further details become known.
Update
18 May 16:30:59
We've been informed that the fix for this issue is currently being tested with Openreach's supplier, but should be released to them on the 25th May. Once released to Openreach, they will then perform internal testing of this before deploying it to their network. We haven't been provided with any estimation of dates for the final deployment of this fix yet.
In the interim, as a workaround we've had a linecard swap at the cabinet performed for all known affected circuits on TalkTalk backhaul, which has restored IPv6 on all TT circuits known to be affected by this issue.
BT have come back to us suggesting that they too have a workaround, so we have requested that it is implemented on all known affected BT circuits to restore IPv6 to the customers known to have this issue on BT backhaul.
Resolution A fix was rolled out in the last week of June; re-testing with impacted customers has shown that IPv6 is functioning correctly on their lines again after Openreach applied this fix.
Broadband Users Affected 0.05%
Started 7 Feb 09:00:00 by AA Staff
Closed 19 Jul

10 Jul 02:21:59
[Broadband] BT blip - Closed
Details
Posted: 10 Jul 02:16:23
Looks like all lines on BT backhaul blipped just before 2am. Lines reconnected right away though. Some lines are on the wrong LNS now, so we may move them back - which will show a longer gap in the graphs.
Update
10 Jul 02:22:25
Sessions are all back, and on the right LNS again.
Started 10 Jul 01:59:03
Closed 10 Jul 02:21:59
Previously expected 10 Jul 02:30:00

3 Jun 17:28:30
Details
Posted: 3 Jun 17:06:27
Something is definitely not looking right; it seems to be intermittent and is impacting Internet access.
Update
3 Jun 17:10:34
Looks like a denial of service attack of some sort.
Update
3 Jun 17:17:45
Looks like it may be more widespread than just us.
Update
3 Jun 17:23:17
Definitely a denial of service attack; it impacted some routers and one of the LNSs. Some graphs were lost.
Resolution Target isolated for now.
Started 3 Jun 16:59:05
Closed 3 Jun 17:28:30

8 Jun 10:49:27
Details
Posted: 7 Jun 10:33:00
We are seeing some customers who are still down following a blip within TalkTalk. We currently have no root cause but are investigating.
Update
7 Jun 11:13:21
A small number of lines are still down, however most have now resumed service. We are still communicating with TalkTalk so we can restore service for all affected lines.
Update
7 Jun 11:23:02
Looks like we're seeing another blip affecting many more customers this time. We are still speaking to TalkTalk to determine the cause of this.
Update
7 Jun 11:59:53

TalkTalk have raised an incident with the following information:

"We have received reports from a number of B2B customers (Wholesale ADSL) who are experiencing a loss of their Broadband services. The impact is believed to approximately 600 lines across 4 or 5 partners. All of the impacted customers would appear to route via Harbour Exchange. Our NOC have completed initial investigations and have passed this to our IP operations team to progress. "

As a result, we'll move TalkTalk traffic away from the Harbour Exchange datacentre to see if it helps. This move will be seamless and will not affect other customers.

Update
7 Jun 12:05:38
Our TalkTalk traffic has now been moved away from HEX89. There are still a small number of customers offline; if they reboot their router/modem, that may force a re-connection and a successful login.
Update
7 Jun 12:37:29
At 12:29 we saw around 80 lines drop, most of these are back online as of 12:37 though. The incident is still open with TalkTalk engineers.
Update
7 Jun 13:19:57
TalkTalk are really not having a good day. We're now seeing packetloss on lines as well as a few more drops. We're going to bring the HEX89 interconnect back up in case that is in any way related; we're also chasing TT on this.
Update
7 Jun 14:37:21
This is still an open incident with TalkTalk, it is affecting other ISPs using TalkTalk as their backhaul. We have chased TalkTalk for an update.
Update
7 Jun 15:37:21

Update from TalkTalk: "Network support have advised that service has been partially restored. Currently Network Support are continuing to re-balance traffic between both LTS’s (HEX & THN). This work is currently being completed manually by our Network support team who ideally need access to RadTools to enable them to balance traffic more efficiently. We are currently however experiencing an outage of RadTools which is being managed under incident 10007687. We will continue to provide updates on the progress as soon as available."

Probably as a result, we are still seeing low levels of packetloss on some TalkTalk lines.

Update
7 Jun 16:49:12
It's looking like the low levels of packetloss stopped at 16:10. Things are looking better.
Update
8 Jun 08:31:43
There are a handful of customers that are still offline, we have sent the list of these circuits to TalkTalk to investigate.
Update
8 Jun 10:26:02

Update from TalkTalk: "We have received reports from a number of B2B customers (Wholesale ADSL & FTTC) who are experiencing authentication issues with their Broadband services. The impact is believed to approximately 100 lines across 2 partners. All of the impacted customers would appear to route via Harbour Exchange. Our NOC have completed initial investigations and have passed this to our Network support team to progress."

We have actually already taken down our Harbour Exchange interconnect, but this has not helped.

Update
8 Jun 10:49:27
Over half of the remaining affected lines logged back in at 2017-06-08 10:38.
Update
8 Jun 11:22:39
The remaining customers offline should try rebooting their router/modem and if still not online then please contact Support.
Resolution

From TalkTalk: The root cause of this issue is believed to have been a service request which involved 3 network cards being installed in associated equipment at Harbour Exchange. This caused BGP issues on card (10/1). To resolve this, Network Support shut down card (10/1), but this did not resolve all issues. This was then raised to Ericsson, who recommended carrying out an XCRP switchover on the LTS. Once the switchover was carried out, all subscribers' connections dropped on the LTS and the majority switched over to the Telehouse North LTS. Network Support then attempted to rebalance the traffic across both LTS platforms, however they were not able to due to an ongoing system incident impacting Radius Tools. Network Support instead added 2 new 10G circuits to the LTS platform to relieve the congestion and resolve any impact. As no further issues have been identified, this incident will now be closed and any further RCA investigation will be carried out by problem management.

Regarding the problem with a few circuits not being able to establish PPP, the report from TalkTalk is as follows: Network support have advised that they have removed HEX (Harbour Exchange) from the radius to restore service until a permanent fix can be identified. Network support are liaising with Ericsson in regards to this and investigations are ongoing.

Broadband Users Affected 0.20%
Started 7 Jun 10:05:00
Closed 8 Jun 10:49:27

17 May 12:00:00
Details
Posted: 26 Apr 11:01:07
We have noticed packetloss between 8pm and 10pm on Tuesday (25th April) evening on a small number of TalkTalk connected lines. This may be related to TalkTalk maintenance. We will review this again tomorrow.
Update
26 Apr 16:41:43
We are seeing packet loss this afternoon on some of these lines too. We are contacting TalkTalk.
Update
26 Apr 16:44:23
Update
26 Apr 17:58:06
We have moved TalkTalk traffic over to our Harbour Exchange interconnect to see if this makes a difference or not to the packet loss that we are seeing...
Update
26 Apr 20:50:41
Moving the traffic made no difference. We've had calls with TalkTalk and they have opened an incident and are investigating further.
Update
26 Apr 20:55:14
The pattern that we are seeing relates to which LAC TT are using to send traffic over to us. TT use two LACs at their end, and lines via one have loss whilst lines via the other have no loss.
Update
26 Apr 21:32:30
Another example, showing the loss this evening:

Update
26 Apr 22:26:50
TalkTalk have updated us with: "An issue has been reported that some Partners are currently experiencing quality of service issues, such as slow speed and package (SIC) loss, with their Broadband service. From initial investigations the NOC engineers have identified congestion to the core network connecting to Telehouse North as being the possible cause. This is impacting Partners across the network and not specific to one region, and the impacted volume cannot be determine at present. Preliminary investigations are underway with our NOC and Network Support engineers to determine the root cause of this network incident. At this stage we are unable to issue an ERT until the engineers have completed further diagnostics."
Update
28 Apr 09:43:43
Despite TalkTalk thinking they had fixed this, we are still seeing packetloss on these circuits between 8pm and 10pm. It's not as much packetloss as we saw on Wednesday evening, but loss nonetheless. This has been reported back to TalkTalk.
Update
4 May 15:42:04
We are now blocking the two affected TalkTalk LACs on new connections, e.g. a PPP re-connect. This means that it will take a bit longer for a line to re-connect (depending upon the broadband router, perhaps a minute or two).

This does mean that lines will not be on the LACs which have evening packetloss. We hope not to keep this blocking in place for long, as we expect TalkTalk to fix this soon.

Update
5 May 16:48:13
We've decided to stop blocking the LACs that are showing packetloss, as it was causing connection problems for a few customers. We had a telephone call with TalkTalk today, and this issue is being escalated with them.
Update
5 May 19:47:37
We've had this update from TalkTalk today:

"I have had confirmation from our NOC that following further investigations by our IP Operations team a potential issue has been identified on our LTS (processes running higher than normal). After working with our vendor it has been recommended a card switch-over should resolve this."

This has been scheduled for 16th May. We will post further details next week.

Update
8 May 09:26:21
Planned work has been scheduled for 16th May; details and updates are on https://aastatus.net/2386
Update
5 Jun 11:49:24
The planned work took place on 16th May and appears to have been a success.
Broadband Users Affected 20%
Started 26 Apr 10:00:00
Closed 17 May 12:00:00

2 Jun 16:44:00
Details
Posted: 2 Jun 12:48:45
We have had several customers notify us that they're having connectivity issues in the Cambridge area - all FTTC customers so far - where TCP packets larger than 1300 bytes appear to be dropped. ICMP appears unaffected.
We are currently in the process of reporting this to BT and will post further updates as they become available.
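A quick way to confirm a size threshold like this from the customer end is to probe with increasing payload sizes - for example, don't-fragment pings (`ping -M do -s <size>` on Linux) - and binary-search for the largest size that still gets through. A small sketch of the search itself, where `probe` is a placeholder for whatever size-dependent test is run:

```python
def largest_passing_size(probe, lo=64, hi=1500):
    """Binary-search the largest payload size (bytes) for which probe(size)
    succeeds. probe is any callable returning True/False - e.g. one that
    sends a don't-fragment ping of that size. Assumes sizes up to some
    threshold pass and larger ones fail, as with the ~1300-byte symptom."""
    if not probe(lo):
        return None  # even the smallest probe fails
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid  # mid passed; the threshold is at least mid
        else:
            hi = mid - 1  # mid failed; the threshold is below mid
    return lo
```

Run against an affected line, this should converge on a value near 1300; on a healthy line it reaches the full MTU-limited size.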
Update
2 Jun 13:08:24
Proactive have raised a fault, AF-AR-OP-3774655. This has also been discovered to affect FTTP customers, which makes sense as they use the same backbone infrastructure as the FTTC customers.
Update
2 Jun 13:35:22
Several customers are reporting that their lines are now performing as expected after restarting their PPP session, so it may be worth restarting your PPP session and letting us know what happens when you try that.
We're still awaiting an update from proactive.
Resolution We have been advised by BT that a link between Milton Keynes and Peterborough was erroring, and was taken out of service to resolve the issue earlier today.
Broadband Users Affected 0.20%
Started 2 Jun 11:47:00
Closed 2 Jun 16:44:00
Cause BT

17 May 12:30:00
Details
Posted: 17 May 09:10:40
We have identified outages within the North London area affecting multiple exchanges. The affected exchanges are listed as: SHOEBURYNESS,THORPE BAY, CANVEY ISLAND, HADLEIGH – ESSEX, MARINE, NORTH BENFLEET, STANFORD LE HOPE, VANGE, WICKFORD, BLOOMSBURY AKA HOWLAND ST, HOLBORN, PRIMROSE HILL, KINGSLAND GREEN, TOTTENHAM, BOWES PARK, PALMERS GREEN, WALTHAM CROSS, WINCHMORE HILL, EDMONTON, NEW SOUTHGATE, EPPING, HAINAULT, ILFORD NORTH, ROMFORD, UPMINSTER, NORTH WEALD, STAMFORD HILL, DAGENHAM, ILFORD CENTRAL, GOODMAYES, STRATFORD, HIGHAMS PARK, LEYTONSTONE, WALTHAMSTOW, CHINGFORD, KENTISH TOWN and MUSWELL HILL. No root cause has been identified. We will update this status page as the updates become available from our supplier.
Update
17 May 09:30:01
Affected customers went offline at 02:12 this morning. Further information: these exchanges are offline due to 20 tubes of fibre being damaged by the installation of a retail advertising board on the A118 Southern [South?] Kings Rd. TalkTalk's notifications expected service to be restored by 9am, but due to the nature of the fibre break it may well take longer to fix.
Update
17 May 10:31:29
Update from TalkTalk: Virgin media have advised that work to restore service is ongoing but due to the extent of the damage this is taking longer than expected. In parallel our [TalkTalk] NOC are investigating if the Total Loss traffic can be re-routed. We will provide a further update as soon as more information is available.
Update
17 May 10:55:56
From TalkTalk: Virgin Media have advised restoration work has completed on the majority of the damaged fibre, our NOC team have also confirmed a number of exchanges are now up and back in service. The exchanges that are now back in service are Muswell Hill, Ingrebourne, Loughton and Bowes Park.
Update
17 May 11:16:28
From TalkTalk: Our NOC team have advised a number of exchanges are now in service. These are Muswell Hill, Bowes Park, Loughton, Ingrebourne, Bowes Park, Chingford, Highams Park, Leytonstone, Stratford and Upton Park.
Update
17 May 11:30:54
That said, we are still seeing lines on exchanges mentioned above as being offline....
Update
17 May 12:11:20
No further updates as yet.
Resolution It looks like most, if not all, of our affected lines are now back online. Update from TalkTalk: Virgin Media have advised 5 of the 8 impacted fibre tubes have been successfully spliced and their engineers are still on site restoring service to the remaining cables
Started 17 May 01:54:00 by AA Staff
Closed 17 May 12:30:00

4 May 18:00:00
Details
Posted: 4 May 16:53:39
Some TT lines blipped at 16:34 and 16:46. It appears that the lines have recovered. We have reported this to TT.
Update
5 May 08:48:41
This was caused by an incident in the TalkTalk network. This is from TalkTalk: "...Network support have advised that the problem was caused by the card failure which has now been taken offline..."
Started 4 May 16:36:27 by AAISP automated checking
Closed 4 May 18:00:00
Cause TT

21 Apr 13:07:49
Details
Posted: 21 Apr 09:59:57
At 9:30 we saw a number of TalkTalk connected circuits drop and reconnect. We're investigating, but it seems like a TalkTalk backhaul issue.

Update
21 Apr 11:24:54
Most lines logged back in very quickly. A few are still down. It seems TalkTalk had some sort of incident at the Harbour Exchange datacentre; we're expecting further information shortly.
Update
21 Apr 12:54:00
TalkTalk have some sort of outage in their Harbour Exchange datacentre. As we have an interconnect in both Telehouse and Harbour Exchange the traffic has moved over to Telehouse. There are a very small number of customers (<10) who are still offline, they may just need a reboot of their router or modem, but we are contacting them individually.
Update
21 Apr 13:08:47
TalkTalk say: Root cause analysis conducted by the NOC and our Network Support team has identified that this incident was caused by a crashed routing card for the LTS. The card was reloaded by our Network Support team and full service has now been restored.
Resolution Any customers still offline: please reboot your router/modem, and contact Support if you remain offline.
Started 21 Apr 09:30:50
Closed 21 Apr 13:07:49

2 Mar 22:10:44
Details
Posted: 2 Mar 21:48:39
Relating to https://aastatus.net/2358 we are currently in an emergency at-risk period while we perform some tests alongside TalkTalk staff. We don't expect any problems, but this work involves re-routing TalkTalk traffic within our network. This work is happening now. Sorry for the lack of notice.
Update
2 Mar 21:53:05
We have successfully and cleanly moved all TalkTalk traffic off our THN interconnect and on to our HEX interconnect. (Usually we use both all the time, but for this testing we are forcing traffic through the HEX side.)
Update
2 Mar 21:55:52
We're bringing back routing across both links now...
Update
2 Mar 22:03:40
We are now moving traffic to our THN interconnect.
Resolution We're now back to using both the TalkTalk links. Tests completed.
Started 2 Mar 21:46:17
Closed 2 Mar 22:10:44

16 Feb 15:00:00
Details
Posted: 16 Feb 16:00:49
We have spotted some odd latency affecting two of our LNSs (A.Gormless and B.Gormless). As you would expect, this was also visible on the graphs shown for customers' lines.
Resolution We believe we have addressed the issue now, sorry for any inconvenience.
Started 15 Feb 02:00:00
Closed 16 Feb 15:00:00
Previously expected 16 Feb 15:00:00

13 Feb 10:02:12
Details
Posted: 13 Feb 10:00:36
We just had an LNS blip - this would have caused some customers to drop PPP and reconnect.
Resolution There have been a few LNS blips recently. However, we do know the cause and have a software update to roll out which will fix the problem.
Started 13 Feb 09:56:00
Closed 13 Feb 10:02:12

4 Feb 09:32:03
Details
Posted: 4 Feb 09:14:11
We had an LNS reset, so some customers' lines will have dropped and re-connected. We're investigating the cause.
Resolution We have found the cause, and expect a permanent fix to be deployed on next round of LNS upgrades.
Broadband Users Affected 12%
Started 4 Feb 09:12:00
Closed 4 Feb 09:32:03

31 Jan 16:29:00
Details
Posted: 31 Jan 16:24:03
Customers on one of our LNSs just lost their connection and would have logged back in again shortly after. We're investigating the cause.
Update
31 Jan 16:41:32
Customers are back online. The CQM graphs for the day would have been lost for these lines. We do apologise for the inconvenience this caused.
Broadband Users Affected 12%
Started 31 Jan 16:16:00
Closed 31 Jan 16:29:00

24 Jan 18:15:00
Details
Posted: 24 Jan 16:11:45
Some TalkTalk connected customers have had high packet loss on their lines from around 3pm today. These lines are in the Chippenham/Bristol area. If affected, you'll be experiencing slow speeds.
Update
24 Jan 16:19:23
The CQM graph for an affected line shows the fault starting just after 9am, with severe packet loss from 3pm.

Update
24 Jan 18:32:37
TalkTalk say "NOC & Network engineering are currently investigating congestion and packet loss across the core network." More details to follow.
Update
24 Jan 18:45:58
The problem looks fixed as of 18:15.
Update
25 Jan 08:48:01
(This also affected some other circuits in other parts of the country.)
Resolution From TalkTalk: Root cause has not yet been identified. The (TalkTalk) NOC engaged Network Support, who investigated and added a new link in order to alleviate congestion. The B2B Enterprise team are currently retesting with the affected customers, and initial feedback indicates that this has resolved the issue.
Broadband Users Affected 1%
Started 24 Jan 15:00:00
Closed 24 Jan 18:15:00

23 Jan 21:50:24
Details
Posted: 23 Jan 21:17:18
Since 20:23 we're seeing ~20% packet loss on TalkTalk connected VDSL circuits, these customers will be experiencing very slow speeds. These are in the SALTERTON/DORCHESTER/WESTBOURNE/CRADDOCK area. We have contacted TalkTalk regarding this.
Update
23 Jan 21:50:48
This looks to have been fixed.
Resolution This was due to a card failure at Yeovil.
Started 23 Jan 20:23:00
Closed 23 Jan 21:50:24
Cause TT

24 Jan
Details
Posted: 23 Jan 08:21:07
Sorry to say that the new LNSs (H and I) were not archiving graphs and so the CQM graphs for customers on these LNSs have not been recorded.
Resolution Fixed
Started 16 Jan
Closed 24 Jan
Previously expected 24 Jan

18 Jan 20:30:00
Details
Posted: 18 Jan 20:36:56
We're looking into why some broadband lines and mobile SIMs dropped and reconnected at around 20:30 this evening.
Resolution Lines are back online; most reconnected within a few minutes. This blip affected about 1/8th of our customers and was caused by one of our LNSs restarting unexpectedly. We'll be investigating the cause, and we apologise for the inconvenience this caused.
Started 18 Jan 20:35:58
Closed 18 Jan 20:30:00
Cause LNS restart/crash

17 Jan 09:48:47
Details
Posted: 17 Jan 08:35:28
Once again we are seeing an issue where TT lines are failing to connect. This does not impact lines that are currently connected unless they drop and reconnect for some reason. It looks like only half of TT's LACs are impacted, so lines do eventually reconnect after several tries. This has been reported to TalkTalk, and we will update this post as soon as we get an update.
Update
17 Jan 09:50:18
All affected lines appear to have reconnected.
Resolution We are still investigating the root cause.
Broadband Users Affected 1%
Started 17 Jan 01:00:00
Closed 17 Jan 09:48:47
Previously expected 17 Jan 12:31:59