Recent posts
Timeline view of events on our network and systems

Events from the AAISP network from the last few months.

MAINTENANCE Planned BT
AFFECTING
BT
STARTING
May 07, 12:01 AM (in 11½ days)
DESCRIPTION
BT have planned work on their side of one of our hostlinks, on 7th May between midnight and 6AM. We will move traffic away from this hostlink beforehand, so we do not expect this to impact customers.

MINOR Closed AA Services
AFFECTED
AA Services
STARTED
Apr 23, 05:30 PM (1½ days ago)
CLOSED
Apr 23, 08:49 PM (1½ days ago)
DESCRIPTION
There is a routing problem affecting access to some of our services, eg our website and L2TP service among others. We're investigating.
Resolution: This was caused by a third-party internet provider with whom we have been in talks about providing us some transit, and who had provisionally configured some of their routers to allow us to announce our IP blocks through them. We had not got to the point of actually setting up the service. However, one of their routers malfunctioned and got into a state where it was re-announcing our IP blocks to parts of the internet, which meant some traffic bound for us was being sent to them instead. We mitigated the problem by announcing more specific routes, and also got in touch with the provider, who promptly fixed their router.
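For readers curious why announcing more specific routes helps: routers forward traffic using the longest matching prefix, so a more specific announcement from us takes precedence over a leaked covering prefix. Below is a minimal sketch of that selection logic, using documentation prefixes (198.51.100.0/24) rather than our real address space, and standing in for real BGP route selection.

  import ipaddress

  # Hypothetical routing table: prefix -> where traffic is forwarded.
  # 198.51.100.0/24 (a documentation range) stands in for one of our IP blocks,
  # leaked by the third party; the /25s are the more specific routes we announced.
  routes = {
      ipaddress.ip_network("198.51.100.0/24"): "third party (leaked)",
      ipaddress.ip_network("198.51.100.0/25"): "A&A (more specific)",
      ipaddress.ip_network("198.51.100.128/25"): "A&A (more specific)",
  }

  def best_route(destination: str) -> str:
      """Return the destination of the longest matching prefix, as routers do."""
      addr = ipaddress.ip_address(destination)
      matches = [net for net in routes if addr in net]
      if not matches:
          raise LookupError("no covering route")
      return routes[max(matches, key=lambda net: net.prefixlen)]

  print(best_route("198.51.100.42"))  # -> A&A (more specific), not the leaked /24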

MINOR Closed VoIP
AFFECTED
VoIP
STARTED
Apr 22, 12:14 PM (2¾ days ago)
CLOSED
Apr 22, 12:30 PM (2¾ days ago)
DESCRIPTION
Some customers are having problems registering their VoIP phones; investigations are underway. Affected customers will have problems making and receiving calls.
Resolution: Between 11:14 and 12:30 we had a problem storing the port for some SIP registrations, which caused those registrations to fail.
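For background on why the stored port matters: a SIP registration records the contact address, including the port, that incoming calls should be routed to, so losing or corrupting the port means inbound calls cannot reach the phone. The sketch below is purely illustrative (made-up header values, not our registrar code) and shows the host and port a registrar would need to keep.

  import re

  # Example REGISTER Contact header (made-up values). The port after the colon is
  # what a registrar must store in order to route incoming calls back to the phone.
  contact_header = "Contact: <sip:+441234567890@203.0.113.7:50600>;expires=3600"

  def parse_contact(header: str):
      """Extract host and port from a Contact header, defaulting to 5060."""
      m = re.search(r"sip:[^@>]*@([\w.\-\[\]]+?)(?::(\d+))?[;>]", header)
      if not m:
          raise ValueError("no SIP URI found")
      return m.group(1), int(m.group(2)) if m.group(2) else 5060

  host, port = parse_contact(contact_header)
  print(host, port)  # 203.0.113.7 50600 - lose the port and inbound calls break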

MAINTENANCE Open TalkTalk
AFFECTING
TalkTalk
STARTED
Apr 21, 09:00 PM (3½ days ago)
DESCRIPTION

We have multiple interlinks to TalkTalk that carry our broadband traffic. TalkTalk have scheduled planned work on both these links during a four-week period from Tuesday 23rd April until 16th May (specifically midnight to 6AM on 23rd and 25th April, and 1st, 2nd, 9th and 16th May).

Due to the work being carried out (software updates of their "LTSs") we are unable to move traffic seamlessly between our interlinks, and so TalkTalk customers will see their connections drop and reconnect on these early mornings.


MAINTENANCE Completed CityFibre
AFFECTING
CityFibre
STARTED
Apr 20, 04:00 AM (5¼ days ago)
CLOSED
Apr 22, 07:25 AM (3 days ago)
DESCRIPTION
We'll be performing some work on one of our routers for CityFibre connections. Some CityFibre customers will see their connection drop at 4AM on Saturday morning and reconnect moments later.
Resolution: This work has been completed.

MAINTENANCE Completed Ethernet BGP
AFFECTING
Ethernet BGP
STARTED
Apr 15, 10:30 PM (9¼ days ago)
CLOSED
Apr 17, 09:30 PM (7½ days ago)
DESCRIPTION
We're performing some maintenance on "A.Weightless", the primary router used for our Ethernet (Etherway/Etherflow) services and for customer BGP sessions (i.e. customers with their own BGP sessions to us). We have moved traffic onto our secondary router for the next 48 hours. This move is seamless and customer traffic is not affected.
Resolution: This has been completed.

MINOR Closed Servers
AFFECTED
Servers
STARTED
Apr 15, 01:36 PM (9¾ days ago)
CLOSED
Apr 15, 03:07 PM (9¾ days ago)
DESCRIPTION
At around 13:30 one of our servers had a disk problem, and it needs to be rebooted and fixed. This server is a hypervisor that runs some of our core services. As we run many redundant and spare servers, with services failing over to other servers when a problem occurs, the customer impact is minimal.
Resolution:

MINOR Closed DATA SIMs
AFFECTED
DATA SIMs
STARTED
Apr 11, 12:15 PM (13¾ days ago)
CLOSED
Apr 11, 01:30 PM (13¾ days ago)
DESCRIPTION
We've seen some Data SIMs drop and reconnect from 12:15 today - we suspect caused by something upstream, probably in the mobile network.
Resolution:

MAINTENANCE Completed L2TP
AFFECTING
L2TP
STARTED
Apr 10, 04:00 AM (15¼ days ago)
CLOSED
Apr 11, 04:10 AM (14¼ days ago)
DESCRIPTION
We will be replacing the hardware of our main L2TP router during the day on Wednesday 10th April. As part of this work we will be moving L2TP customers over to the backup L2TP server shortly after 4AM on 10th April. This will cause customers to drop and reconnect.
Resolution: This has been completed.

MINOR Closed L2TP
AFFECTED
L2TP
STARTED
Apr 09, 02:00 PM (15¾ days ago)
CLOSED
Apr 09, 02:07 PM (15¾ days ago)
DESCRIPTION
At 2pm L2TP customers experienced a drop and reconnect of their service.
Resolution: Hardware replacement underway: https://aastatus.net/42656

MAINTENANCE Completed LNS
AFFECTING
LNS
STARTED
Apr 09, 04:00 AM (16¼ days ago)
CLOSED
Apr 09, 10:00 AM (16 days ago)
DESCRIPTION
We've had a few customers on the S.Gormless LNS report slow speeds, and moving them on to different LNSs has helped. In light of there being no obvious reason for this, we will reboot the LNS at 4AM on Tuesday 9th April. The small number of customers on this LNS will experience a drop and reconnect of their service.
Resolution: This has been completed.

MAINTENANCE Completed TalkTalk
AFFECTING
TalkTalk
STARTED
Apr 09, 03:00 AM (16¼ days ago)
CLOSED
Apr 11, 07:01 PM (13½ days ago)
DESCRIPTION

We have multiple interlinks to TalkTalk that carry our broadband traffic. TalkTalk have scheduled planned work on our links in our Equinix LD8 datacentre for 11th April between 1AM and 6AM.

So as to minimise the impact on our customers, we will move traffic off these links on 9th April at 3AM. This should be seamless, but there is a risk of some customers having a brief interruption to their service.


Resolution: This work has been completed with no customer impact.

MAINTENANCE Open SMS
AFFECTING
SMS
STARTED
Apr 08, 02:13 PM (16¾ days ago)
DESCRIPTION
This work has started, but we did not post a planned work notice as we expected it to be seamless. Sadly that was not quite the case today, so here is more detail on what we are planning over the next few weeks. The main thing is: if you notice any problems, please tell us right away.
  • Some cosmetic improvements (nicer format phone numbers) in emailed or tooted SMS (done)
  • Additional options (such as forcing the email/toots to use E.123 international "+" format numbers) (done)
  • Additional options for posting JSON to http/https (TODO)
  • Allowing SMS to be relayed (chargeable) to other numbers (done)
  • We already allow multiple targets for a number for SMS (done)
  • Some improvements for 8-bit SMS (which are rare); we previously treated them as Latin-1, which is not correct (TODO)
  • Some new features for trialling a new SIP2SIM platform (TODO)
  • Improve "visible" format for content in email/toot when special characters are used (e.g. NULL as ␀) (TODO)
The 8-bit data format changes are likely to be the least "backwards compatible" changes, but should not impact anyone as 8-bit messages are not generally encountered. I.e. incoming SMS will rarely (if ever) be 8-bit coded, and when they were we would get special characters wrong. Similarly, sending 8-bit SMS would only show the expected characters on some older phones, and would be wrong on many others, as the specification does not say which character set to use. We will, however, handle NULLs much better, which are relevant for some special use cases.
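As a rough illustration of the "visible" formatting mentioned above (e.g. NULL shown as ␀): Unicode's Control Pictures block provides printable stand-ins for control characters. A minimal sketch of that mapping, not our actual implementation:

  def make_visible(text: str) -> str:
      """Replace C0 control characters (and DEL) with Unicode Control Pictures,
      e.g. NUL (0x00) becomes U+2400 SYMBOL FOR NULL."""
      out = []
      for ch in text:
          code = ord(ch)
          if code < 0x20:          # C0 controls map to U+2400..U+241F
              out.append(chr(0x2400 + code))
          elif code == 0x7F:       # DEL has its own picture, U+2421
              out.append("\u2421")
          else:
              out.append(ch)
      return "".join(out)

  print(make_visible("ping\x00pong\r\n"))  # -> ping␀pong␍␊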

MINOR Closed SMS
AFFECTED
SMS
STARTED
Apr 08, 09:15 AM (17 days ago)
CLOSED
Apr 08, 11:21 AM (16¾ days ago)
DESCRIPTION
SMS delivery via HTTP POST was broken via one of our SMS relays for a while this morning. The symptom was that "da", the destination address, was being posted as the "target" rather than the destination number. This means if we post to your server on https://example.com/sms/, we could have posted the SMS with the destination number of literally "https://example.com/sms/". This would have broken anything depending on the "da" to make decisions on what to do with the message. This is fixed now, and the problem occurred between around 9:15 and 11:21. Apologies for any inconvenience.
Resolution:
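For anyone consuming these HTTP POSTs, a simple sanity check on the receiving side can catch this sort of fault, since "da" should look like a phone number rather than a URL. A hedged sketch of such a check (only the "da" field name comes from the post above; the rest is illustrative):

  import re

  def plausible_destination(da: str) -> bool:
      """True if 'da' looks like a phone number (optional leading +), else False."""
      return bool(re.fullmatch(r"\+?\d{3,15}", da))

  # What should arrive vs. what arrived during the fault window:
  assert plausible_destination("+441234567890")
  assert not plausible_destination("https://example.com/sms/")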

MAINTENANCE Completed LNS
AFFECTING
LNS
STARTED
Apr 08, 01:00 AM (17¼ days ago)
CLOSED
Apr 08, 08:08 AM (17 days ago)
DESCRIPTION
We'll be moving customers off the B.Gormless and G.Gormless LNS during the early hours of Monday 8th April. These customers will see their line drop and reconnect from 1AM.
Resolution: This has been completed.

MINOR Closed DNS, Email and Web Hosting
AFFECTED
DNS, Email and Web Hosting
STARTED
Apr 03, 10:54 AM (21¾ days ago)
CLOSED
Apr 04, 10:54 AM (20¾ days ago)
DESCRIPTION
Our DoH/DoT resolvers ( https://support.aa.net.uk/DoH_and_DoT ) were intermittently failing DNS lookups. It seemed to start over the Easter weekend.

Our DoT/DoH front ends are DNS-aware proxies (dnsdist) in front of back ends running unbound, and dnsdist uses TLS to speak DNS to the back ends. Some of the back ends had failed to reload their TLS certificates after renewal, so although the renewed certificates were valid, unbound was still serving the old certs, which eventually expired. This resulted in broken back ends in the pool, which dnsdist kept trying to bring back into service. The intermittent nature of the failures meant that it wasn't obvious to users, as clients generally retry silently in the background.

Of course our monitoring should have caught this! We've fixed the underlying problem which caused unbound not to pick up the renewed certificates, and we've improved monitoring to catch similar problems should they occur in future.
Resolution:
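The kind of monitoring improvement mentioned above can be as simple as periodically making a TLS connection to each back end and checking the expiry of the certificate it is actually serving, which catches the "renewed on disk but never reloaded" case. A rough sketch along those lines (hypothetical host name, and assuming the standard DoT port 853; this is not our monitoring code):

  import socket
  import ssl
  from datetime import datetime, timezone

  def days_until_served_cert_expires(host: str, port: int = 853) -> float:
      """Connect over TLS and report how many days remain on the certificate the
      server actually presents (catches 'renewed on disk but never reloaded')."""
      ctx = ssl.create_default_context()
      with socket.create_connection((host, port), timeout=5) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              cert = tls.getpeercert()
      expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                       tz=timezone.utc)
      return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

  # Hypothetical back end name; alert while there is still time to fix it.
  remaining = days_until_served_cert_expires("dns-backend.example.net")
  if remaining < 14:
      print(f"WARNING: served certificate expires in {remaining:.1f} days")

(An already-expired certificate will simply fail the TLS handshake here, which is itself a clear signal to alert on.)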

MAINTENANCE Completed Easter
AFFECTING
Easter
STARTED
Mar 28, 10:00 AM (27¾ days ago)
CLOSED
Apr 02, 09:00 AM (23 days ago)
DESCRIPTION
We are closed on both Bank Holiday Friday and Bank Holiday Monday. We're open 10AM-2PM on Saturday, as usual, for technical support.
Resolution:

MAINTENANCE Completed CityFibre
AFFECTING
CityFibre
STARTED
Mar 27, 12:01 AM (29¼ days ago)
CLOSED
Mar 27, 09:05 AM (29 days ago)
DESCRIPTION

CityFibre are carrying out work that will affect CityFibre connections in Maidenhead, Luton, Leicester, Kettering, Gloucester, Coventry, Glasgow, Bournemouth, Milton Keynes, Newcastle Upon Tyne, Northampton, Norwich, Peterborough, Plymouth, Poole, Reading, Rugby, Solihull, Swindon, Wakefield and Wolverhampton.

Customers may experience a momentary loss of service ranging from a couple of seconds up to a maximum of 30 seconds several times during the maintenance window.


Resolution: We assume this work was carried out; we didn't see any customers affected by it.

MAJOR Closed BT Circuits
AFFECTED
BT Circuits
STARTED
Mar 26, 01:13 AM (30¼ days ago)
CLOSED
Mar 26, 03:30 AM (30 days ago)
DESCRIPTION
We're investigating the cause of major stability issues. Update to follow ASAP.
Resolution:

The cause of this disruption was BT planned work to carry out 'invasive testing' on our links. They have confirmed that the work has been completed.

They failed to inform us of this. We already have a formal complaint open regarding a previous lack of notifications, and BT have since been sending notifications of works (e.g. the one for 27th March) to us manually. This is being followed up with our account manager.

We do apologise to our customers who were affected by this.

We're furious.

We have had further information from BT about their work. The work was on a transmission link between two datacentres, and as part of that, ports on devices that use the link were also disabled and re-enabled. As a result we saw one port on each pair of our host links go down and up around 15 times each, at the same time. As this was not cleanly shut down by BT, it caused traffic to break and customers to drop and reconnect multiple times between midnight and 3:30AM.


MAINTENANCE Completed BT
AFFECTING
BT
STARTED
Mar 25, 03:00 AM (1 month ago)
CLOSED
Mar 27, 09:03 AM (29 days ago)
DESCRIPTION

We have multiple interlinks to BT that carry our broadband traffic. BT have scheduled planned work on our links in our Harbour Exchange Square datacentre for 27th March between midnight and 6AM.

So as to minimise the impact on our customers, we will move traffic off these links on 25th March at 3AM. This should be seamless, but last time we attempted this BT had a misconfiguration which caused some customers to drop their connection!


Resolution: Unfortunately BT's migration didn't go to plan and they rolled back their change. No customer circuits were affected. The work will be rescheduled for a later date.

MAINTENANCE Completed LNS and Routers
AFFECTING
LNS and Routers
STARTED
Mar 23, 03:00 AM (1 month ago)
CLOSED
Apr 07, 12:15 PM (17¾ days ago)
DESCRIPTION

We will be performing software upgrades on our FB9000 LNSs during the early hours of Saturday 23rd, Sunday 24th and Monday 25th this week. This will cause customer lines to drop and reconnect a couple of times between the hours of 3AM and 4:30AM.

Customers who will be affected by this are those with line speeds of 80Mb/s and above.

The software upgrade being applied does have a plausible fix for the CPU hang that we have been seeing. However, if we see any further CPU hangs we will revert to the seemingly stable version of the software.


Resolution: We have seen some CPU hangs with the latest software, so will be reverting to the more stable 'Factory' version.

MAINTENANCE Assumed Completed Broadband
AFFECTING
Broadband
STARTED
Jan 19, 03:50 PM (3 months ago)
DESCRIPTION

This is a summary and update regarding the problems we've been having with our network, causing line drops for some customers, interrupting their Internet connections for a few minutes at a time. It carries on from the earlier, now out of date, post: https://aastatus.net/42577

We are not only an Internet Service Provider.

We also design and build our own routers under the FireBrick brand. This equipment is what we predominantly use in our own network to provide Internet services to customers. These routers are installed between our wholesale carriers (e.g. BT, CityFibre and TalkTalk) and the A&A core IP network. The type of router is called an "LNS", which stands for L2TP Network Server.

FireBricks are also deployed elsewhere in the core; providing our L2TP and Ethernet services, as well as facing the rest of the Internet as BGP routers to multiple Transit feeds, Internet Exchanges and CDNs.

Throughout the entire existence of A&A as an ISP, we have been running various models of FireBrick in our network.

Our newest model is the FB9000. We have been running a mix of prototype, pre-production and production variants of the FB9000 within our network since early 2022.

As can sometimes happen with a new product, at a certain point we started to experience some strange behaviour; essentially the hardware would lock-up and "watchdog" (and reboot) unpredictably.

Compared to a software 'crash', a hardware lock-up is very hard to diagnose, as little information is obtainable when it happens. If the FireBrick software ever crashes, a 'core dump' is posted with specific information about where the software problem happened. This makes it a lot easier to find and fix.

After intensive work by our developers, the cause was identified as (unexpectedly) something to do with the NVMe socket on the motherboard. At design time, we had included an NVMe socket connected to the PCIe pins on the CPU, for possible future uses that were then undecided. We did not populate the NVMe socket, though. The hanging issue completely cleared up once an NVMe was installed, even though it was not used for anything at all.

As a second approach, the software was then modified to force the PCIe to be switched off such that we would not need to install NVMes in all the units.

This certainly did solve the problem in our test rig (which comprises multiple FB9000s, PCs to generate traffic, switches, etc.). For several weeks, FireBricks which had formerly been hanging often under "artificially worsened" test conditions stopped hanging altogether, becoming extremely stable.

So, we thought the problem was resolved. And, indeed, in our test rig we still have not seen a hang. Not even once, across multiple FB9000s.

However...

We did then start seeing hangs in our Live prototype units in production (causing dropouts to our broadband customers).

At the same time, the FB9000s we have elsewhere in our network, not running as LNS routers, are stable.

We are still working on pinpointing the cause of this, which we think is highly likely to be related to the original (now solved) problem.

Further work...

Over the next 1-2 weeks we will be installing several extra FB9000 LNS routers. We are installing these with additional low-level monitoring capabilities in the form of JTAG connections from the main PCB so that in the event of a hardware lock-up we can directly gather more information.

The enlarged pool of LNSs will also reduce the number of customers affected if there is a lock-up of one LNS.

We obviously do apologise for the blips customers have been seeing. We do take this very seriously, and are not happy when customers are inconvenienced.

We can imagine some customers might also be wondering why we bother to make our own routers, and not just do what almost all other ISPs do, and simply buy them from a major manufacturer. This is a fair question. At times like this, it is a question we ask ourselves!

Ultimately, we do still firmly believe the benefits of having the FireBrick technology under our complete control outweigh the disadvantages. CQM graphs are still almost unique to us, and these would simply not be possible without FireBrick. There have also been numerous individual cases where our direct control over the firmware has enabled us to implement individual improvements and changes that have benefitted one or many customers.

Many times over the years we have been able to diagnose problems with our carrier partners, which they themselves could not see or investigate. This level of monitoring is facilitated by having FireBricks.

But in order to have finished FireBricks, we have to develop them. And development involves testing, and testing can sometimes reveal problems, which then affect customers.

We do not feel we were irrationally premature in introducing prototype FireBricks into our network, having had them under test, not routing live customer traffic, for an appropriate period beforehand.

But some problems can only reveal themselves once a "real world" level and nature of traffic is being passed. This is unavoidable, and whilst we do try hard to minimise disruption, we still feel the long-term benefits of having FireBricks more than offset the short-term problems in the late stages of development. We hope our detailed view on this is informative, and even persuasive.