Love Locks

Project Proposal

This is where we will provide the summary, background, challenges, resources, goals, deliverables, and schedule for the locks project!


Summary

We will rigorously test a set of lock implementations (namely test-and-set, test-and-test-and-set, queue, ticket, etc.), using micro-benchmarks to compare and contrast the consequences of each. We will observe metrics such as latency, interconnect traffic, scalability, storage cost, and fairness, and measure how the locks compare. The micro-benchmarks will be kept very simple so that the focus remains on the locks themselves.


Background

We will use simple micro-benchmarks of code that test various types of lock implementations. An example of a simple micro-benchmark is:
lock(&is_lock);
x++;
unlock(&is_lock);
Locks are invaluable for parallelism: they let threads run in parallel while avoiding race conditions, so that we can guarantee the correctness of execution. At the same time, we need to avoid any possible deadlocks and try to minimize starvation of threads waiting on locks.


The Challenge

This problem is challenging because, while testing parallelism, we need to ensure correctness in lock placement. To compare the results of one lock against another, we need to be consistent both in our tests (running the same micro-benchmarks) and in the tools we use for profiling. Examining the differences we observe, we will consider why they arise and attribute them to specific qualities of one lock over another. Another challenge is handling cases where a thread obtains the lock but then stays idle: are there ways to reclaim the lock from such a thread? Finally, implementing locks in assembly is something we will have to learn, as until now we have only used library functions to implement locks.


Resources

We will run our tests on the PSC and GHC machines. We will create several different micro-benchmarks from scratch, and we plan to use tools like perf to observe cache misses for these micro-benchmarks. We will reference our class slides as well as online research papers.
We have not found any papers yet but hope to find them soon. We would also like guidance on useful research papers and on specific places to look for assembly implementations of locks. Additionally, we would like guidance on what other kinds of locks exist (apart from the ones in the slides) and how they should hypothetically behave.


Goals and Deliverables

Our goal is to demonstrate the pros and cons of some locks over others. We would like to observe and measure metrics such as latency, cache misses, scalability, storage cost, and fairness for different lock implementations. After writing the micro-benchmarks, and if time permits, we may also test the locks on bigger workloads, meaning workloads whose critical sections are significantly larger (more lines of code) than our micro-benchmarks. We expect to see substantial differences between the locks, with some being preferable to others depending on the circumstances. Likewise, if we fall behind schedule, we may reduce the number of lock implementations or the number of tests accordingly.
We will present different micro-benchmarks with lock implementations measuring different factors, to see which locks are better suited to a given situation. This will likely mean small sections of the poster dedicated to those micro-benchmarks, ideally each with a graph/table comparison of the behaviour of the different locks.
These goals are tentative and may be subject to change based on instructor feedback or on new ideas that come up as we make progress on the project.
We plan to answer questions like: will lock "x" cause starvation? Will lock "y" cause a lot of interconnect traffic? The result is a thorough analysis of the pros and cons of different lock implementations.


Platform Choice

We have decided to write code in assembly so that we can directly plug in the assembly instructions related to locks. We will use pthreads for spawning threads, which works on essentially any system, so we will use the GHC machines for consistency with the course, and also the PSC machines to take advantage of their high core count, where we can run many more threads at once and truly stress contention, overhead, fairness, etc.


Schedule

10/31 - 12/10

10/31 - 11/6

Week 1

Figure out which micro-benchmarks we want to use, and which locks we would like to test. Begin writing the micro-benchmarks.

Locks have been decided:

Mutex, test-and-set, test-and-test-and-set, test-and-test-and-set with backoff, queue lock, array lock, and ticket lock.

11/7 - 11/13

Week 2

Microbenchmark 1 for lock implementations (Testing and Analysis)
MB1 will be of a very simple arithmetic operation (incrementation)

11/14 - 11/20

Week 3

Microbenchmark 2 for lock implementations (Testing and Analysis)
MB2 will be of a very simple memory access

11/21 - 11/27

Week 4

11/22 Milestone Report due at 9AM
Planning and prep for lock stealing implementation, getting started
MCS locks (mcs, mcs-tp)

11/28 - 12/04

Week 5

Complete a working lock-stealing ticket lock implementation that uses timestamps to determine whether a thread loses its claim to the lock
CLH locks (clh, clh-try)
28 Nov - lock stealing
1-4 Dec - more microbenchmarks and measurements (matrix multiplication, critical vs non-critical sections, tests 1-5)

12/05-12/10

Week 6

Making comparisons and compiling the results. Preparing for the poster session. Implementing more locks or more tests if we have more time.
12/09 Final Report due
12/10 Poster Session
8 Dec - completion of report, summation of analysis
