Avoid nested set update on descriptions CSV import (reduce DB deadlocks)
|Target version:||Release 2.6.0|
|Google Code Legacy ID:|||
|Tested version:||2.5, 2.6|
In some cases, running a descriptions CSV import (either from the GUI with the worker or from the CLI) results in a database deadlock error: the import does not finish and the database is left in a damaged state.
SQLSTATE: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
While the CSV import is not executed within a transaction per se, it performs multiple nested set updates, and those operations behave like transactions in that they create locks on the scanned records (more info). Depending on the number of descriptions and on where the import is targeted in the tree, these locks may be held for a long time and affect many rows. While a lock is active, any update to the affected rows (such as creating or deleting a description under a locked one) will cause the described error. The error can appear in one process or the other, depending on which one acquires the lock first.
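To illustrate why a single nested set insert can lock so many rows, here is a minimal sketch in Python with SQLite. The table and column names (`information_object`, `lft`, `rgt`) follow the nested set convention and the data is hypothetical; AtoM's actual code is PHP and its schema is more complex, so this is only a model of the locking behavior, not the project's implementation.

```python
# Minimal sketch (not AtoM's actual schema) of a nested set insert:
# every node to the "right" of the insertion point has its lft/rgt
# values shifted, so on MySQL/InnoDB the two UPDATEs below take row
# locks on a potentially huge slice of the tree.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE information_object (id INTEGER PRIMARY KEY, lft INTEGER, rgt INTEGER)"
)
# A tiny tree: root (1,6) with two children, (2,3) and (4,5).
conn.executemany(
    "INSERT INTO information_object (id, lft, rgt) VALUES (?, ?, ?)",
    [(1, 1, 6), (2, 2, 3), (3, 4, 5)],
)

def insert_child(conn, parent_rgt):
    """Make room for a new leaf just before parent_rgt, then insert it."""
    # The expensive part: these UPDATEs touch every row at or beyond
    # the insertion point, which in a 40K-description tree can be
    # thousands of rows held under lock at once.
    conn.execute("UPDATE information_object SET rgt = rgt + 2 WHERE rgt >= ?", (parent_rgt,))
    conn.execute("UPDATE information_object SET lft = lft + 2 WHERE lft > ?", (parent_rgt,))
    conn.execute(
        "INSERT INTO information_object (lft, rgt) VALUES (?, ?)",
        (parent_rgt, parent_rgt + 1),
    )

# Insert a new child under the root (whose rgt was 6).
insert_child(conn, 6)
rows = conn.execute("SELECT id, lft, rgt FROM information_object ORDER BY lft").fetchall()
```

Two concurrent imports issuing these range UPDATEs against overlapping slices of the tree is exactly the situation where one transaction ends up waiting on the other and InnoDB reports error 1213.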
The most direct way I found to reproduce the issue was running two CSV import tasks targeting the same description:
The CSV contained a few hundred rows, and the target description was located around the middle of a 40K-description tree.
- We recently addressed the deadlock issue for web requests with retries (#12709)
- We're avoiding the nested set updates on hierarchy deletions until the end of the process (#13211 and #13239)
- A different transaction isolation level may improve the situation, although I doubt it (#13360)
- Removing the indexes on the lft and rgt columns may speed up the nested set updates (#13359)
- Fully removing the nested set could be the closest thing to a final solution in this particular case (#13240)
The CSV import for descriptions already has an option to disable the nested set update entirely (--skip-nested-set-build), which exists because per-row nested set updates can take a long time compared with the build nested set task. So, apart from the performance improvement, removing the individual nested set updates and running the nested set build at the end of the CSV import task will reduce the likelihood of deadlocks and, if something goes wrong, the situation won't be worse than it is right now.
#3 Updated by José Raddaoui Marín over 1 year ago
- Status changed from Code Review to QA/Review
- Assignee deleted (José Raddaoui Marín)
Merged in qa/2.6.x.
Now, the per-row nested set update is always disabled and, if the --skip-nested-set-build option is not used, the nested set is fully rebuilt right before the end of the task. To be tested alongside #13355 and #13352.
I'll follow up in this ticket with more details about the deadlock and performance improvements in the IOs CSV import after these changes.
#4 Updated by Dan Gillean over 1 year ago
- Status changed from QA/Review to Verified
- Requires documentation set to No
Wow, this is actually pretty amazing - I'm not surprised there were previously errors, but being able to run two giant imports, each targeting a descendant in a large hierarchy, is impressive!