Identified - We're making updates to how our private Sandbox environment handles and directs network traffic as part of ongoing infrastructure improvements. We expect this maintenance to be complete by May 22. During this window, your users may lose the ability to interact with your Marqeta private Sandbox. If this occurs, please reach out to support@marqeta.com.
May 13, 2026 - 10:04 PDT
Core API Operational
ACH Processing Operational
Payment Processing Operational
Card Issuing Operational
Card Fulfillment Operational
Applications Operational
Credit Operational
Reporting & Analytics Operational
Digital Wallets & Tokenization Operational
Risk & Disputes Operational
Webhooks Operational
Sandbox Under Maintenance
3DS Operational
May 13, 2026

Unresolved incident: Private Sandbox Maintenance.

May 12, 2026
Resolved - Following a successful monitoring period, Marqeta systems have maintained stable processing since May 12, 08:40 AM PT with no further anomalies detected. We apologize for any impact this may have had on cardholders and appreciate your patience throughout the resolution process.
May 12, 20:10 PDT
Monitoring - Following up on our previous updates, Pulse has informed us that the responsible acquirer has implemented a change to resolve the root cause, and Marqeta has cross-verified that STIP levels returned to normal parameters as of 08:40 AM PT. Our engineering teams will continue to closely observe the stability of the Pulse network to ensure no further disruptions occur for your cardholders, and we will provide a final resolution summary once the monitoring period is successfully complete.
May 12, 09:23 PDT
Update - Our joint technical investigation with the Pulse team has successfully isolated the source of the anomalous data to a single acquiring partner.
Pulse is actively working to engage the responsible acquirer to secure a path for remediation. Concurrently, Marqeta is escalating the issue with the utmost priority.
We will provide an update as soon as we have confirmation of the fix and an estimated time to resolution from Pulse.

May 12, 03:20 PDT
Update - To keep our customers informed, we have initiated a joint troubleshooting session with the Pulse team, and we are working together to investigate the root cause and secure a path for remediation. We will provide an update as soon as we have a confirmed fix or an estimated time to resolution.
May 12, 01:45 PDT
Identified - Marqeta engineers have observed elevated STIP levels impacting some cardholders utilizing the Pulse network starting on May 11 at 10:20 PM PDT. A dedicated response team has been assembled and has identified the technical issue as a processing failure within our systems when encountering a new data pattern originating from the Pulse network. To mitigate the impact and confirm the necessary fix, we have engaged the Pulse team for clarity on this new data behavior, and Pulse has raised an incident on their side to investigate. We will provide an update as soon as we have confirmation of the fix and an estimated time to resolution.
May 12, 00:48 PDT
May 11, 2026
Resolved - All of Marqeta's services should now be fully operational.
May 11, 15:57 PDT
Update - Nearly all Marqeta services are now fully operational. Engineers are continuing to mitigate some data discrepancies in certain services. We will provide our next update when the data has been fully remediated, or if we become aware of a substantial change in condition.
May 11, 07:09 PDT
Update - Nearly all Marqeta services are now fully operational. Engineers are continuing to mitigate some data discrepancies in certain services. We will continue monitoring the situation throughout the weekend. We will provide our next update Monday, May 11 at 10:00 AM EST, or if we become aware of a substantial change in condition.
May 8, 14:46 PDT
Update - We are continuing to observe improvements to Marqeta's services. Engineers are continuing to put mitigations in place, such as moving traffic away from the affected region and availability zone. We will continue monitoring the situation throughout the day for any changes in status. We will provide our next update when we receive additional information from AWS.
May 8, 07:00 PDT
Monitoring - We are continuing to observe improvements to Marqeta's services. Engineers are continuing to put mitigations in place, such as moving traffic away from the affected region and availability zone. We will also continue monitoring the situation overnight for any changes in status. Our next update will be at 10:00 EST.
May 7, 23:53 PDT
Update - AWS is observing some early signs of recovery. Some Marqeta services remain impacted by the ongoing AWS us-east-1 outage. Our engineers are diligently monitoring the situation and implementing measures to reduce impact where possible. We will provide our next update when we receive additional information from AWS.
May 7, 22:29 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. Our engineers are diligently monitoring the situation and implementing measures to reduce impact where possible. We will provide our next update when we receive additional information from AWS.
May 7, 21:42 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. AWS is shifting traffic away from the impacted zone for additional services. Our teams continue monitoring the situation closely and will provide the next update in 30 minutes or when new information is available.
May 7, 21:14 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. AWS is shifting traffic away from the impacted zone for additional services. Our teams continue monitoring the situation closely and will provide the next update in 30 minutes or when new information is available.
May 7, 20:44 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. AWS is shifting traffic away from the impacted zone for additional services. Our teams continue monitoring the situation closely and will provide the next update in 30 minutes or when new information is available.
May 7, 20:14 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. AWS continues working to resolve the thermal issue in the affected availability zone. We are closely monitoring their progress and our service recovery. Next update in 30 minutes or sooner if conditions change.
May 7, 19:44 PDT
Update - Marqeta services remain impacted by the ongoing AWS us-east-1 outage. AWS continues working to resolve increased temperatures in the affected availability zone with no current ETA for full resolution. Some services are beginning to see improvement. Next update in 30 minutes or sooner if conditions change.
May 7, 19:14 PDT
Update - All Marqeta services remain impacted by the ongoing AWS us-east-1 outage with no ETA for resolution from AWS. The next update will be in 30 minutes or sooner if conditions change.
May 7, 18:45 PDT
Identified - We are continuing to monitor the AWS problem in us-east-1, and we are seeing impact across many Marqeta services due to this. We are actively removing traffic from this region to limit the impact to our customers. Next update will be in 30 minutes or when AWS provides any additional context.
May 7, 18:14 PDT
Investigating - Marqeta engineers are detecting an elevated number of STIPs starting at approximately 4:30 PM PST. Teams are monitoring an AWS us-east-1 outage as the likely cause. We will provide updates every 30 minutes or upon discovery of new information.
May 7, 17:46 PDT
May 10, 2026

No incidents reported.

May 9, 2026

No incidents reported.

May 8, 2026
May 7, 2026
Resolved - All replicated assets in the system should now be fully synchronized.
May 7, 17:40 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is impacting data replication for transactional data in Diva reports and MQD endpoints. All replicated assets in the system are now fully synchronized with the exception of one cluster, which is currently approximately 8 hours behind and progressing toward full synchronization. Marqeta teams continue to monitor the recovery process and maintain engagement with the external vendor. More detailed information will be provided tomorrow morning, or sooner if new information becomes available.
May 5, 14:12 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have observed assets continuing to improve. Some corrected reports will be available in the early morning of May 6th. We are continuing our engagement with the external vendor to expedite resolution and obtain an ETA for full restoration. Our next update will be in 60 minutes.
May 5, 13:30 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have observed assets beginning to catch up. We are continuing our engagement with the external vendor to expedite resolution and obtain an ETA for full restoration. Our next update will be in 60 minutes.
May 5, 12:30 PDT
Identified - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have observed assets beginning to catch up. We are continuing our engagement with the external vendor to expedite resolution and obtain an ETA for full restoration. Our next update will be in 60 minutes.
May 5, 11:30 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have escalated our engagement with the external vendor to expedite resolution. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 10:30 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have escalated our engagement with the external vendor to expedite resolution. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 09:32 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have escalated our engagement with the external vendor to expedite resolution. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 08:29 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have escalated our engagement with the external vendor to expedite resolution. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 07:29 PDT
Update - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams have escalated our engagement with the external vendor to expedite resolution. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 06:29 PDT
Investigating - An external vendor's unexpected cluster maintenance starting from about 4:00 PM PST on May 4th is causing a severe lag in data replication for transactional data in Diva reports and MQD endpoints. Marqeta teams are engaging with the external vendor to resolve the upstream cluster issue, running checks to quantify the data lag and load times, and ensuring internal pipelines are ready to automatically process all delayed data once restored. We will have more information about an ETA for restoration once the external vendor provides their own ETA. Our next update will be in 60 minutes.
May 5, 05:29 PDT
May 6, 2026

No incidents reported.

May 5, 2026
May 4, 2026

No incidents reported.

May 3, 2026

No incidents reported.

May 2, 2026

No incidents reported.

May 1, 2026

No incidents reported.

Apr 30, 2026
Resolved - This incident has been resolved. Service has been restored and customers should no longer experience any impact.
Apr 30, 10:34 PDT
Update - Engineers have confirmed that the incident is now stabilized. We will provide a final update once full resolution is confirmed.
Apr 29, 15:20 PDT
Update - Work continues toward a full resolution. We will provide the next update when new information becomes available.
Apr 29, 14:15 PDT
Update - Our engineering teams remain focused on implementing the resolution. We will provide another update in approximately 60 minutes or when new information is provided.
Apr 29, 13:08 PDT
Identified - Engineers have identified the root cause and are now working on a resolution. We will provide another update in approximately 60 minutes or when new information becomes available.
Apr 29, 12:04 PDT
Update - Multiple teams are working together to investigate and resolve the 3DS performance degradation. Challenge timeouts and STIPs may continue to impact customers. We will provide another update in approximately 30 minutes.
Apr 29, 11:35 PDT
Update - Our engineering team is actively continuing to investigate the root cause of the 3DS performance issue. Challenge timeouts and STIPs may still be impacting customers. We remain focused on resolution and will provide another update in approximately 30 minutes.
Apr 29, 11:04 PDT
Update - 3DS is experiencing degraded performance that may cause challenge timeouts and STIPs. Our engineering team is investigating the root cause. Next update will be in approximately 30 minutes or when new information becomes available.
Apr 29, 10:32 PDT
Investigating - Starting at 13:00 EST, 3DS is experiencing degraded performance that may cause challenge timeouts. Our engineering team is investigating the root cause. We will provide another update in approximately 30 minutes.
Apr 29, 10:04 PDT
Apr 29, 2026