- I'm a bit confused about what counts as bad and good in this case: concurrency can cause issues (so concurrency is bad), and we use locking to help (so locking is good)? Also, what do we mean when we say that "serializability" improves concurrency without sacrificing isolation?
- I was not able to copy the locking folder from the reading into my directory; I got an error that the folder does not exist.
- Does putting a lock/unlock around multiple statements turn them into one atomic transaction? If I write-lock all the resources I am using, perform ~500 operation steps, and then commit and unlock all of them, is all of that one atomic transaction? (See the first sketch after this list.)
- Can the rollback command affect committed transactions?
- Why can a transaction read an uncommitted write from an incomplete transaction when S2PL is not used? I thought values were local until they were committed. If uncommitted values can be accessed by dirty reads, then what does "commit" actually do? (See the second sketch after this list.)
- I'm a little confused about how transactions and locking actually differ, and how those differences are achieved.
- Are there any issues that arise from transactions running in the before world?
- I'm not sure I understand the purposes of locking and isolation levels.
- To prevent phantom tuples, can you just insert and lock the new tuple atomically?
- Can you prevent deadlock by having the operating system assign a priority value to each transaction (starting low for new transactions), detect deadlocks, and increment the priority of any transaction that is forced to restart to break a deadlock, so that no transaction is starved for too long? And because the operating system, not the transaction itself, decides whether a transaction will abort and try again, would this also prevent livelock? (See the last sketch after this list.)
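
On the lock/unlock question, here is a minimal sketch of doing strict two-phase locking by hand over a toy in-memory store (the names `balances` and `transfer` are made up for illustration, not from the reading). Holding every write lock until the end makes the multi-step update atomic as far as other well-behaved threads can observe:

```python
import threading

# Toy in-memory "database": each record is guarded by its own lock.
balances = {"A": 100, "B": 100}
locks = {name: threading.Lock() for name in balances}

def transfer(src, dst, amount):
    # Growing phase: acquire all locks up front, in a fixed global
    # order so two concurrent transfers cannot deadlock each other.
    for name in sorted({src, dst}):
        locks[name].acquire()
    try:
        # Transaction body: threads that respect the locks can never
        # observe the state where money has left src but not reached dst.
        balances[src] -= amount
        balances[dst] += amount
    finally:
        # Shrinking phase: release everything only at the end ("commit").
        for name in sorted({src, dst}):
            locks[name].release()
```

So locking alone buys isolation from cooperating threads, even across ~500 steps. What it does not buy is atomicity against failure: if the process crashes mid-body, there is no log to undo the partial update, which is one place where real transactions go beyond bare locks.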
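
On the dirty-read question, the key point is that in many lock-based designs writes go directly to the shared copy rather than to a private workspace; "commit" is then the moment after which the system promises never to undo them. A minimal sketch of the anomaly (toy dict, hypothetical `writer`/`reader` names) when the reader skips locking:

```python
import threading
import time

store = {"x": 0}   # writes are applied in place: no private workspace

def writer():
    store["x"] = 42       # uncommitted write, visible to everyone
    time.sleep(0.1)       # ... transaction still in progress ...
    store["x"] = 0        # transaction aborts and restores the old value

def reader():
    time.sleep(0.05)
    print(store["x"])     # prints 42: a dirty read of a value that is
                          # never committed

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
t1.start(); t2.start()
t1.join(); t2.join()
```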
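
On the deadlock/priority proposal, what is described is essentially deadlock detection with victim selection plus aging, which is close to what real systems do. A minimal sketch of that policy under the stated assumptions (hypothetical `Txn` and `resolve_deadlock` names; detecting the wait-for cycle is assumed to happen elsewhere):

```python
import itertools

class Txn:
    _ids = itertools.count()

    def __init__(self):
        self.id = next(Txn._ids)  # lower id means older transaction
        self.priority = 0         # new transactions start low

def resolve_deadlock(cycle):
    """Break a wait-for cycle: abort the lowest-priority transaction
    (ties go to the youngest) and bump its priority so a repeated
    victim eventually stops being chosen, bounding starvation."""
    victim = min(cycle, key=lambda t: (t.priority, -t.id))
    victim.priority += 1          # aging: compensate the victim
    return victim                 # caller rolls it back and restarts it
```

Whether this also rules out livelock is subtler: having the scheduler, not the transactions, decide who aborts removes the symmetric retry pattern behind classic livelock, but a restarted transaction can still re-form the same cycle unless restarts are delayed or ordered.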