Any scheduled server maintenance or related issues will be posted here in real time.
THIS IS A SCHEDULED EVENT Mar 24, 11:00 - 14:00 UTC
Mar 10, 04:03 UTC
Scheduled - Cloud IAM AuthZ (the authorization system underlying the Cloud Manager, API, and CLI) will be undergoing maintenance between 11:00 and 14:00 UTC on Tuesday, March 24th, 2026. During this window, running Linodes and related services will not be disrupted, but there may be brief periods of increased response latency or unavailability of the Linode Cloud Manager, API, and CLI.
Please ensure that you complete critical or important jobs in the Cloud Manager or API before the maintenance window. We will update this status page once this event is complete and Linode customers have full access to all Linode services.
Customers who need assistance from Linode Support during this time may need to call 855-454-6633 (+1-609-380-7100 outside of the United States) to contact our Support team. Please note that for brief periods of the maintenance window, our Support team may not be able to assist with issues related to the Cloud Manager or API, authenticate users to their accounts, or respond to Support tickets. As soon as our Support team regains access, we will answer tickets in the order they are received.
Impacts on Current Linode Customers:
For brief periods during the maintenance window, current Linode customers may not be able to log in to the Cloud Manager, interact with the API, or perform any administrative or management functions. This includes actions such as creating, removing, booting, migrating, backing up, and shutting down resources across all Linode services. While we expect the impact period to be approximately 10 minutes cumulatively across the maintenance window, a longer impact period is possible.
This maintenance will impact the Kubernetes API. Dynamic aspects of LKE that rely on the Linode API will also be impacted, including autoscaling, recycling, rebooting, attaching/detaching PVCs, NodeBalancer provisioning, as well as the ability to create new clusters. Cluster nodes and running workloads will not be affected.
Impacts on Users Trying to Create Linode Accounts / Awaiting Account Authentication:
While the Linode Cloud Manager is offline for brief periods during the maintenance window, we will be unable to accept requests for new accounts or to authenticate users awaiting full account access.
Thank you for your patience and understanding.
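For customers who script against the API, a quick reachability check before kicking off critical jobs can help avoid colliding with the maintenance window. The sketch below is our illustration, not an official tool: the base URL is the public API v4 endpoint, but the helper names and the token handling are placeholders.

```python
import urllib.request
import urllib.error

API_BASE = "https://api.linode.com/v4"  # public Linode API v4 base URL


def build_request(path: str, token: str) -> urllib.request.Request:
    # Attach a personal access token as a Bearer header.
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {token}"},
    )


def api_is_up(token: str, timeout: float = 5.0) -> bool:
    # /profile is a lightweight authenticated endpoint; any HTTP
    # response (even a 401) proves the API itself is reachable.
    try:
        urllib.request.urlopen(build_request("/profile", token), timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the API answered, it just rejected the request
    except urllib.error.URLError:
        return False  # timed out or unreachable: hold off on critical jobs
```

A batch job could call `api_is_up()` first and defer its work if the check fails during the window.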
Monitoring - At this time we have been able to correct the issues affecting connectivity in our IAD (Washington, DC) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
Mar 9, 21:40 UTC
Update - Our team has identified the issue affecting connectivity in our IAD (Washington, DC) data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Mar 9, 19:46 UTC
Investigating - Our team is investigating an emerging service issue affecting compute hosts in IAD (Washington, DC). We will share additional updates as we have more information.
Mar 9, 18:39 UTC
Resolved - Our team has identified that the issue affecting the Cloud Manager and API is related to a previously communicated incident from March 4, 2026 (https://status.linode.com/incidents/yzlp8ykymmhm). Today's issue was caused by a backlog of notifications that accumulated during the previous incident, which has now cleared. We haven't observed any additional issues with the Cloud Manager or API, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com for assistance.
Mar 9, 15:18 UTC
Update - We are continuing to investigate this issue. As a workaround, customers can log in to https://cloud.linode.com/ and manually check their notifications until the issue is mitigated. Subsequent updates will be posted as progress is made.
Mar 9, 06:31 UTC
Update - We are continuing to investigate this issue. The appropriate subject matter experts are engaged. Subsequent updates will be posted as progress is made.
Mar 9, 05:48 UTC
Investigating - Our team is investigating an emerging issue where some customers are receiving delayed event notifications or notification emails for activities performed in the Cloud Manager. We will share additional updates as we have more information.
Mar 9, 04:04 UTC
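While email delivery was delayed during this incident, the workaround of manually checking notifications in the Cloud Manager can also be scripted against the API. A minimal sketch, assuming a personal access token: the GET /account/events endpoint is part of the public API v4, but the helper names and the default for a missing "seen" flag are our assumptions.

```python
import json
import urllib.request

API_BASE = "https://api.linode.com/v4"


def fetch_events(token: str, timeout: float = 10.0) -> list:
    # GET /account/events lists recent account activity (boots, backups,
    # resizes, ...) independently of email notification delivery.
    req = urllib.request.Request(
        f"{API_BASE}/account/events",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp).get("data", [])


def unseen(events: list) -> list:
    # Keep only events the account holder has not yet seen; each event
    # object carries a boolean "seen" flag (assumed True when absent).
    return [e for e in events if not e.get("seen", True)]
```

Polling `unseen(fetch_events(token))` on a schedule gives roughly the same information the delayed emails would have carried.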
Resolved - We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Mar 7, 21:08 UTC
Monitoring - At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
Mar 7, 18:14 UTC
Update - We are still working to implement the fix for the Object Storage service issue. We will share another update as soon as progress is made.
Mar 7, 17:07 UTC
Identified - Our team has identified the issue affecting the Object Storage service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Mar 7, 16:08 UTC
Update - Our team is continuing to investigate the Object Storage API issue, which affects all Object Storage regions. This issue is limited to interacting with the Object Storage service, such as managing buckets, access keys, or Object Storage policies. The underlying Object Storage service itself remains operational. We appreciate your patience and will provide further updates as soon as possible.
Mar 7, 15:31 UTC
Update - We are continuing to investigate this issue. Thank you for your patience as we work toward a resolution.
Mar 7, 14:31 UTC
Update - We are actively investigating an issue affecting the Object Storage service. Users may experience connection timeouts and errors when accessing this service. We will provide updates as we learn more and work toward a resolution.
Mar 7, 13:35 UTC
Update - We are continuing to investigate this issue.
Mar 7, 12:34 UTC
Investigating - This issue is impacting Object Storage access globally. During this time customers may encounter issues with managing buckets, access keys, or Object Storage policies. Our team is continuing to investigate.
Mar 7, 12:02 UTC
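Because the underlying storage remained operational while management calls (buckets, access keys, policies) were timing out, automation can ride out this kind of incident by retrying those calls with exponential backoff. This is a generic sketch of ours, not Linode tooling; the function and parameter names are placeholders.

```python
import time


def with_backoff(call, attempts: int = 5, base_delay: float = 1.0):
    # Retry a flaky management-plane call (e.g. a bucket or access-key
    # operation) with exponential backoff: base_delay, 2x, 4x, ...
    # `call` is any zero-argument callable; the final error is re-raised.
    for attempt in range(attempts):
        try:
            return call()
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Data-plane reads and writes to existing buckets did not need this treatment during the incident; only control-plane interactions were affected.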
Resolved - We haven’t observed any additional issues with the service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Mar 5, 18:11 UTC
Monitoring - At this time we have been able to correct the issues affecting the service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
Mar 5, 17:01 UTC
Identified - Our team has identified the issue affecting the service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Mar 5, 16:23 UTC
Update - We are continuing to investigate and will provide the next update as progress is made.
Mar 5, 14:48 UTC
Investigating - We are aware of a recurrence of this issue across multiple regions. We are continuing to investigate and will provide the next update as progress is made.
Mar 5, 10:50 UTC
Monitoring - Our team has identified the issue affecting the service and implemented a fix. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
Mar 5, 07:34 UTC
Update - We are continuing to investigate the issue. We will provide the next update as progress is made.
Mar 5, 06:55 UTC
Update - Our subject matter experts are actively investigating the issue. We will provide the next update as progress is made.
Mar 5, 05:48 UTC
Investigating - We are investigating a critical service issue affecting NVIDIA RTX 4000 Ada GPU nodes across multiple regions, including Osaka (osa1), Seattle (sea1), and Chicago (ord1).
Affected GPU nodes may report an unrecoverable error state, leading to failures in Vulkan initialization and GPU-accelerated workloads. Additionally, some LKE clusters in the Osaka region are currently experiencing Control Plane connectivity issues, resulting in timed-out API requests and errors.
Our engineering teams are currently investigating the root cause, focusing on a potential regression in the underlying host hypervisor or GPU firmware. We will provide more information as it becomes available.
Mar 5, 02:07 UTC