
Concurrent programming

From Telecomix Crypto Munitions Bureau


These are notes for Concurrent Programming, TDA381, Chalmers.


Words

(Weak) Fairness

Weak fairness means that if a statement is continually enabled, it will eventually be executed.

"A scenario is (weakly) fair if at any state in the scenario, a statement that is continually enabled eventually appears in the scenario."

Other types of fairness

This is included for the sake of completeness of definition of fairness. Not part of exam AFAIK.

A process gets its turn...

  • always (unconditional fairness)
  • if it is enabled infinitely often (strong fairness)
  • if it is continuously enabled from some point on (weak fairness)

Teacher footnote: "But you should be aware that there is a whole forest of fairness definitions, and all kinds of related topologies, etc. I'm not sure how much of all that makes any sense in practical languages. The above three are a good start."

Scenario

A scenario is an example of how a group of processes interleave the execution of their statements.

  • Example: Can be used to show how a process is never allowed to execute its statements (unfairness)

Interleaving

Processes' statements are picked for execution one at a time.

  • uniprocessor systems: via context switches
    • thousands of statements from process P, then thousands from process Q, etc
  • multiprocessor systems: each processor executes statements from its own processes. Individual statements from multiple processes may be interleaved arbitrarily
  • It is safest never to assume anything about how many statements a process will execute before an interleaving takes place. Always assume that the next statement may come from another process.

Context switch

Meaning: The operating system's scheduler switches to another process. A context switch may also be performed internally by a virtual machine's scheduler, as in Erlang and Java.

Atomic operation

An operation that executes without any interleaving. The whole operation is executed "at once", as a single step.

Atomic hardware operations

  • Store and load from memory are atomic (Dekker's algorithm uses this)
  • Fetch-and-add
  • Test-and-set
  • Compare-and-swap

Atomic high-level operations

  • Examples: Semaphores, monitors
  • High-level atomic operations can sometimes make the code run slowly.

Test-and-set

An atomic operation that can be implemented in hardware, usually exposed as a mnemonic to the assembly programmer.

Pseudocode description:

test-and-set(common, local)
   local <- common
   common <- 1
   return local

Not as powerful as compare-and-swap or fetch-and-add (difficulties with N-process locks).

Fetch-and-add

An atomic operation that can be implemented in hardware, usually exposed as a mnemonic to the assembly programmer.

Pseudocode description:

fetch-and-add(common, local)
   local <- common
   common <- common + 1
   return local
  • Can more easily be used for N-process locks than test-and-set.
  • XADD on x86 since the i486
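The N-process lock alluded to above is the ticket lock: each process draws a ticket with fetch-and-add and spins until its number comes up. A sketch, with fetch-and-add emulated in Python (the `TicketLock` name is my own):

```python
import threading

_fake_atomic = threading.Lock()       # only emulates the hardware's atomicity

def fetch_and_add(common):
    with _fake_atomic:                # pretend the body is a single instruction
        local = common[0]
        common[0] = local + 1
        return local

class TicketLock:
    """N-process lock: each waiter draws a ticket; service is FIFO in ticket order."""
    def __init__(self):
        self.next_ticket = [0]
        self.now_serving = [0]
    def acquire(self):
        my_ticket = fetch_and_add(self.next_ticket)
        while self.now_serving[0] != my_ticket:
            pass                      # busy-wait for our number to come up
    def release(self):
        self.now_serving[0] += 1      # only the lock holder writes this

counter = 0
lock = TicketLock()

def worker():
    global counter
    for _ in range(250):
        lock.acquire()
        counter += 1                  # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

The FIFO ticket order is also what makes this lock starvation-free, unlike a plain test-and-set spinlock.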

Compare-and-swap

An atomic operation that can be implemented in hardware, usually exposed as a mnemonic to the assembly programmer.

Compares a variable against an expected value; if they are equal, the variable is changed to a new value. Otherwise nothing happens.

Pseudocode description:

compare-and-swap(variable, oldval, newval)
   temp <- variable
   if variable = oldval
      variable <- newval
   return temp
  • Can more easily be used for N-process locks than test-and-set.
  • CMPXCHG8B since the Pentium processors (its locked encoding F0 0F C7 C8 is the infamous F00F bug); CMPXCHG16B on x86-64
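The classic use of compare-and-swap is the lock-free retry loop: read the value, compute the update, and retry until our CAS wins. A sketch, with CAS emulated in Python (the `increment` helper is my own):

```python
import threading

_fake_atomic = threading.Lock()       # only emulates the hardware's atomicity

def compare_and_swap(variable, oldval, newval):
    with _fake_atomic:                # pretend the body is a single instruction
        temp = variable[0]
        if variable[0] == oldval:
            variable[0] = newval
        return temp

counter = [0]                         # shared memory cell

def increment():
    # lock-free retry loop: re-read and retry until our CAS succeeds
    while True:
        seen = counter[0]
        if compare_and_swap(counter, seen, seen + 1) == seen:
            return

def worker():
    for _ in range(1000):
        increment()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

A CAS that returns something other than `seen` means another process got in between the read and the swap, so the loop simply tries again.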

Dining philosophers

A group of philosophers wish to eat from a bowl of spaghetti. There are N forks and N philosophers. They sit in a circle around a table so that philosopher I's left fork is philosopher (I+1 modulo N)'s right fork. Each philosopher can only eat from the bowl if she holds two forks. A philosopher may only perform the following tasks:

  1. think (NCS)
  2. try to grab two forks, one at a time (semaphores or whatever)
    • A fork may be held by at most one philosopher at a time
    • If a philosopher tries to pick up a fork that is not on the table, she will be confused (blocked) and wait until the fork is there and then grab it
  3. eat (CS)
  4. return the forks, and then go thinking again

Problem: In which order should the philosophers pick up the forks so that starvation is guaranteed to be impossible? A philosopher starves only if she remains confused for all eternity.

Solutions to the problem:

  • Introduce the notion that they have to enter the dining room first, one at a time.
  • Introduce asymmetry in the protocol for grabbing forks. If each philosopher tries to grab the right fork first, except the last philosopher, who instead tries to grab the left fork, the problem is solved.
  • Remove one philosopher from the table, leaving one seat empty.
  • Have the philosophers' actions be ruled by randomness and make a probabilistic proof.

Note: There is also a proof that each fork is held by at most one philosopher at a time.
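The asymmetric solution can be simulated with one lock per fork. This is a sketch, assuming philosopher i's right fork is fork i and her left fork is fork (i+1) mod N; because the asymmetry rules out circular wait, the simulation always terminates:

```python
import threading

N, MEALS = 5, 10
forks = [threading.Lock() for _ in range(N)]     # a fork is held by at most one philosopher
meals_eaten = [0] * N

def philosopher(i):
    right = forks[i]                 # assumed numbering: seat i's right fork is fork i
    left = forks[(i + 1) % N]
    # asymmetry: everyone grabs the right fork first, except the last philosopher
    first, second = (right, left) if i < N - 1 else (left, right)
    for _ in range(MEALS):
        with first:                  # grab forks one at a time, blocking if taken
            with second:
                meals_eaten[i] += 1  # eat (CS)
        # forks are returned; go back to thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()           # terminates: the asymmetry rules out circular wait
```

With the symmetric protocol (everyone right-first) the same simulation could deadlock with every philosopher holding exactly one fork.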

Proof of Dining Room

Solution: At most K philosophers (K < N) may be in the room at any time. If more philosophers wish to be in the room, they form a queue outside (obviously, the dining room is something like a strong semaphore). When a philosopher leaves the room and there are philosophers standing in the queue, the first philosopher in the queue enters. We also decide that each philosopher tries to grab the left fork first.

Proof:

  1. If a philosopher is confused from trying to grab the left fork, it means that the philosopher to the left has successfully grabbed her right fork, and thus also holds her left fork, since they all try to grab the left fork first. This means that that philosopher is eating or is just about to begin eating, which in turn means she will soon let go of the fork. Thus, eventually each philosopher will have her left fork.
  2. If a philosopher is confused from trying to grab her right fork, it means that the philosopher to the right has successfully grabbed her left fork. If we (because we wish to make a proof by contradiction) assume that the philosopher to the right is forever confused from never being able to also pick up her right fork, this means that the philosopher two seats to the right is also confused from having to wait at her right fork, and so on. Since there are only at most K philosophers in the room, this means that for philosopher K-1, the philosopher to the right is not in the room, and can thus not hold any forks. So, eventually, the philosopher will also have her right fork.
  3. A hungry philosopher standing in queue to enter the room will not starve, since all philosophers in the room will eventually have their forks as shown above, then eat, and then leave the room.

Cases 1-3 cover all possibilities of starvation.

Proof of Asymmetric Philosopher

Solution: Any philosopher sitting on the first chair will grab her right fork first. Every other philosopher will try to grab their left forks first.

Proof:

  1. If the philosopher is confused from trying to grab her left fork, it means that the philosopher to the left has successfully grabbed her right fork. If we assume that the philosopher to the left is also confused, then the philosopher two seats to the left is also confused, and so on. Since the philosophers sit in a ring, this chain eventually reaches a neighbour of the first philosopher; everyone starves only if the philosopher sitting beside the first philosopher is forever confused.
  2. The same applies to the right hand side.
  3. A philosopher sitting beside the first philosopher is confused from trying to grab a fork that the first philosopher has taken. That the first philosopher has taken the fork means that she holds both forks, since she tried to grab the other fork first. If the first philosopher holds two forks, then she is eating or about to eat, which means she will eventually be finished eating.
  4. If the first philosopher is confused from trying to grab either of her two forks, then the other philosopher will soon return her forks, since that philosopher holds both forks (like case 3, but the other way around).

Lemmas 1 and 2 combined with cases 3 and 4 give the conclusion that no starvation can occur.

The Empty Seat proof is similar: the removed philosopher never confuses anyone, since she is never at the table.

Proof of The Infinite Number of Random Philosophers

Solution: An infinite number of philosophers sit at the table. Each one will grab with equal probability the left or the right fork first.

Proof

  • The probability that everyone will try to grab the left fork at once is lim x -> infinity, (1/2)^x = 0. The same applies to the right fork.
  • If the philosopher is confused from grabbing the left fork, then for her to starve, the only possibility is that every other philosopher is also confused from grabbing the left fork. The same applies to the right fork.

Thus, no one will starve.

Deadlock

When two or more processes have blocked/locked themselves against each other.

Example: Two semaphores S1 and S2 are needed to perform action A. S1 is held by process P, S2 is held by process Q. Neither of them lets go of its semaphore, so each is blocked by the other's.

Livelock

A livelock is when two or more processes are stuck in a loop where no useful computation is performed. Similar to deadlock, but the processes remain running/ready.

Starvation

A process never gets to use a resource it needs.

  • This may happen because of unfair interleaving.
  • Starvation is a failure of a liveness property.

Liveness property

A condition that must eventually come true.

Examples

  • A train leaving from station A will eventually arrive at station B. (Meaning it will not get stuck forever somewhere.)
  • Eventually, process P will enter the critical section. (Liveness property as guard against starvation.)

Safety property

A condition that is always true.

Examples

  • It is always true that trains A and B never smash into each other.
  • It is always true that only one process is in the critical section.

State diagram

A diagram over states that a program with all its processes can be in.

  • Used by humans when the number of states a program can be in is small.
  • Used by machines ("model checkers") to check that a program is free from concurrency failures. (Do the liveness and safety properties hold in all states?)

Process barrier

A process barrier is a code construct that receives processes and keeps them blocked until all the processes have arrived at the process barrier. The processes are then unblocked and continue.

Implementation using two general semaphores for an N-process barrier:

  • Mergesort as example
Variables
   integer array A
   general semaphore S <- (0, empty_set)
   general semaphore P <- (0, empty_set)

Process N
   sort Nth part of A
   signal(S)
   wait(P)

Process merger
   Integer I
   for I from 1 to N do         # wait until everyone has "signaled that they are complete"
      wait(S)
   for I from 1 to N do         # "signal to everyone" that they can continue
      signal(P)
   merge all parts of A
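The same barrier pattern can be sketched with Python's counting semaphores. The striped partitioning of A is just for illustration; the point is the N wait(S)/N signal(P) handshake:

```python
import threading, heapq

N = 4
A = [9, 3, 7, 1, 8, 2, 6, 4]
parts = [A[i::N] for i in range(N)]   # split A into N parts (striped, for illustration)

S = threading.Semaphore(0)            # sorters signal "my part is sorted"
P = threading.Semaphore(0)            # the merger signals "you may continue"
result = []

def sorter(i):
    parts[i].sort()                   # sort the i-th part of A
    S.release()                       # signal(S)
    P.acquire()                       # wait(P): the barrier

def merger():
    global result
    for _ in range(N):                # wait until every sorter has signalled
        S.acquire()
    for _ in range(N):                # let the sorters continue
        P.release()
    result = list(heapq.merge(*parts))  # every part is sorted now, so merge

threads = [threading.Thread(target=sorter, args=(i,)) for i in range(N)]
threads.append(threading.Thread(target=merger))
for t in threads: t.start()
for t in threads: t.join()
```

Because the merger only merges after collecting all N signals, no sorter can still be mutating its part at that point.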

Dekker's algorithm

So fundamental it has its own section in this article.

  • Only uses load/store to memory as atomic operations
  • Guarantees mutual exclusion, freedom from starvation and freedom from deadlock
  • Only works for two processes
  • Advantage: Does not need test-and-set and is therefore highly portable between different languages and machine architectures.
  • Disadvantage: Uses busy waiting instead of process suspension, which suggests that minimal time should be spent inside the CS.
Variables
   boolean wantp <- false, wantq <- false
   integer turn <- 1

Process P
   NCS                                        # non-critical section (progress assumption does not hold)
   wantp <- true                              # signal that we want to enter
   while wantq                                # while the other process also wish to enter, do
      if turn = 2                                 # if it is Q's turn to be in the CS
         wantp <- false                               # then let Q enter CS
         await turn = 1                               # and wait for our turn
         wantp <- true                                # when it is our turn, we wish to enter CS!
   CS                                         # critical section
   turn <- 2                                  # "it's your turn now!"
   wantp <- false                             # we don't want to be in the critical section

Process Q
   NCS
   wantq <- true                              # pre-protocol
   while wantp                                #
      if turn = 1                             #
         wantq <- false                       # all of this..
         await turn = 2                       # is pre-protocol..
         wantq <- true                        #
   CS
   turn <- 1                                  # post-protocol
   wantq <- false                             #
  • The enclosing loop forever around each process is omitted.
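The algorithm can be transliterated to Python. This sketch leans on CPython's GIL making individual loads and stores effectively atomic, which stands in for the algorithm's atomic load/store assumption (on real hardware you would also need memory barriers):

```python
import threading

# Shared variables, exactly as in the pseudocode.
wantp, wantq = False, False
turn = 1
counter = 0          # incremented only inside the critical section
K = 100              # iterations per process

def proc_p():
    global wantp, turn, counter
    for _ in range(K):
        wantp = True                 # pre-protocol: signal that we want to enter
        while wantq:
            if turn == 2:
                wantp = False
                while turn != 1:     # await turn = 1
                    pass
                wantp = True
        counter += 1                 # critical section
        turn = 2                     # post-protocol
        wantp = False

def proc_q():
    global wantq, turn, counter
    for _ in range(K):
        wantq = True
        while wantp:
            if turn == 1:
                wantq = False
                while turn != 2:     # await turn = 2
                    pass
                wantq = True
        counter += 1                 # critical section
        turn = 1
        wantq = False

threads = [threading.Thread(target=proc_p), threading.Thread(target=proc_q)]
for t in threads: t.start()
for t in threads: t.join()
```

The unprotected `counter += 1` (a load, an add, and a store) is exactly the kind of statement that would be corrupted without mutual exclusion.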

Temporal logics

Temporal logic is used to reason about programs as they execute. More specifically, it deals with how logical statements change truth value as variables change (as a result of program execution).

Four methods to prove things

  1. A scenario (can only be used to show that something specific may happen)
  2. State diagrams (can be used by humans for simple proofs, by enumeration of all possible states)
  3. Simple logic proofs of mutual exclusion (shown below)
  4. Proofs using temporal logics for liveness and safety properties (..)

Stuff that can be reasoned about:

  • Mutual exclusion
  • Liveness properties
  • Safety properties
  • Progress

General recipe for success:

  1. Identify the invariants
  2. Build lemmas from the invariants
  3. Use the lemmas to make proofs
    • Often, by assuming the opposite is true and arriving at a paradox/contradiction
    • One can also try proof by induction (show base case (n low, typically 0 or 1) is true, show that assuming case n holds then case n+1 also holds, by induction you have your proof!)

Instruction pointers ("control pointers")

  • p4 means "process P is at statement 4"
  • p6..12 is shorthand for p6 OR p7 OR p8 OR ... OR p12
    • Alternatively: Process P is at any of the lines 6 to 12.

Leads to

Operator symbol: ->

Truth diagram:

   A   B   |   A -> B
  ---------+------------
   F   F   |     T
   F   T   |     T
   T   F   |     F
   T   T   |     T

Equivalence

Operator symbol: <->

  • not (A and B) <-> (not A or not B) this is DeMorgan's law
  • Equivalence has the same truth diagram as XNOR (non-equivalence is XOR)
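Both claims, and the truth table of ->, can be checked exhaustively over the four truth assignments. A quick sketch (the helper names are my own):

```python
from itertools import product

def implies(a, b):          # the -> operator
    return (not a) or b

def iff(a, b):              # the <-> operator (XNOR on booleans)
    return a == b

demorgan_holds = all(
    iff(not (a and b), (not a) or (not b))     # De Morgan's law
    for a, b in product([False, True], repeat=2)
)
xor_is_nonequivalence = all(
    (a ^ b) == (not iff(a, b))                 # non-equivalence is XOR
    for a, b in product([False, True], repeat=2)
)
only_false_row = implies(True, False)          # the single F row of the -> table
```

Keeping `implies` around as a function is also a handy way to sanity-check the temporal-logic reasoning later in these notes.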

Always

The always operator is the box. []

Meaning: Always, from now on, X is true. (Meaning of "from now on" -> See example 4 below)

Examples:

  1. Always not (p4 and q4): it is never true that both P and Q are at statement 4 (the critical section)
  2. Always #CS <= 1 (it is always true that at most one process is in the critical section)
  3. Always (p2 -> eventually p4): whenever we reach p2, we will eventually reach p4
  4. p2 -> eventually always p5: if process P reaches statement 2, it will eventually stay in statement 5 forever. (Quite useless, it is just an example.)

Eventually

The eventually operator is the diamond. <>

Examples:

  • Eventually p4: eventually, process P will come to statement 4

How to prove mutual exclusion

  1. Identify invariants
  2. Assume that two (or, in general, more than the allowed N) processes are in the critical section
  3. Identify how this can happen
  4. Show how this would lead to a contradiction

Mutual exclusion proof of a simple (stupid) algorithm

We will ONLY prove that mutual exclusion holds, not that the algorithm is free from deadlock. (In fact, if both processes set their flags before either reaches the await, both spin forever; since the processes stay running, this is really a livelock.)

Code:

Variables
   boolean wantp <- false, wantq <- false

Process P:
   loop forever
1      NCS
2      wantp <- true
3      await wantq = false
4      CS
5      wantp <- false

Process Q:
   loop forever
1      NCS
2      wantq <- true
3      await wantp = false
4      CS
5      wantq <- false

Invariants

  • p3..5 <-> wantp
  • q3..5 <-> wantq

Theorem: always not (p4 AND q4)

  1. Assume p4 AND q4, we wish to show that this leads to a contradiction
  2. What statements can make it true?
    • Successfully executing p3 when q4
    • Successfully executing q3 when p4
  3. await wantq = false (p3) can only progress if q4 AND not wantq
  4. This violates the invariant q3..5 <-> wantq
  5. The same is true for Q, but opposite
  6. Proof completed
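The proof can be cross-checked mechanically, in the spirit of the state-diagram section: enumerate every reachable state of the algorithm and test the safety property, and the invariants, in each. A sketch, with a state being (pc_p, pc_q, wantp, wantq):

```python
def step(pc, my_want, other_want):
    """Run one statement of one process; returns (new_pc, new_my_want), or None if blocked."""
    if pc == 1: return (2, my_want)              # NCS (assume it terminates)
    if pc == 2: return (3, True)                 # want <- true
    if pc == 3:                                  # await other_want = false
        return (4, my_want) if not other_want else None
    if pc == 4: return (5, my_want)              # CS
    if pc == 5: return (1, False)                # want <- false, then loop forever

seen = set()
frontier = [(1, 1, False, False)]                # initial state (pc_p, pc_q, wantp, wantq)
mutex_holds = True                               # always not (p4 AND q4)
invariant_holds = True                           # p3..5 <-> wantp, and q3..5 <-> wantq

while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    pp, pq, wp, wq = state
    if pp == 4 and pq == 4:
        mutex_holds = False                      # both in the CS: violation
    if ((pp in (3, 4, 5)) != wp) or ((pq in (3, 4, 5)) != wq):
        invariant_holds = False
    r = step(pp, wp, wq)                         # P takes one step
    if r: frontier.append((r[0], pq, r[1], wq))
    r = step(pq, wq, wp)                         # Q takes one step
    if r: frontier.append((pp, r[0], wp, r[1]))
```

This is exactly what a model checker does, just without the clever state-space reduction.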

Mutual exclusion proof of Dekker's algorithm

We wish to prove that Dekkers algorithm guarantees mutual exclusion. At most one process is in CS at any time.

Code:

Variables
   boolean wantp <- false, wantq <- false
   integer turn <- 1

Process P:
   loop forever
1      NCS
2      wantp <- true
3      while wantq
4         if turn = 2
5            wantp <- false
6            await turn = 1
7            wantp <- true
8      CS
9      turn <- 2
10     wantp <- false

Process Q:
      same as process P but opposite

Each labeled line is an atomic statement, except NCS and CS.

Invariants (these are always true)

  1. turn = 1 OR turn = 2
  2. p3..5 OR p8..10 <-> wantp
  3. q3..5 OR q8..10 <-> wantq
  4. p5..6 OR q7 -> turn = 2
  5. q5..6 OR p7 -> turn = 1

Theorem: always not ( p8 AND q8 )

  1. Assume p8 AND q8, we wish to show that this leads to a contradiction
  2. What can make this happen?
    1. p3 AND q8 AND ( NOT wantq )
      • Contradicts invariant 3
    2. q3 AND p8 AND ( NOT wantp )
      • Contradicts invariant 2
  3. Both possibilities have been shown to be contradictory, thus the proof is complete

Progress

Progress is that a program will eventually reach a certain statement.

Why reason about progress?

  • To prove liveness properties
    • Prove that the program is free from starvation
  • To prove safety properties(?)

Progress through ordinary code:

  • Assignment statements always get executed, eventually
  • Critical sections (CS) are assumed to be free of errors; a process in a CS always leaves it, eventually.
  • Non-critical sections (NCS) are NOT assumed to be free of errors. A process in an NCS might never leave it.

Progress through control statements: In the example below, it does not hold that p1 AND A = 1 -> eventually p2. This is because Q might change A at any time.

Variables are only checked at the transitions between statements in the program, so even if p1 AND A = 1 is true, it need not be true at the instant of the transition: Q could have been interleaved and changed A just before the transition.

Variables
   A <- 0

Process P
   loop forever
1      if A = 1
2         print "hello"
3      else
4         print "world"

Process Q
   loop forever
1      A <- 0
2      A <- 1

Unless one can prove that p1 -> A = 1, one can NOT make any assumptions on how the program will branch.

This is probably true however: p1 AND always A = 1 -> eventually p2

Reasoning about progress

We will use the always and the eventually operators.

Stuff like

  1. Always A -> eventually B
  2. Eventually, always A.
  3. The two lines above, combined, make B true, eventually.
  • (Do not forget the truth diagram of the -> operator.)

Why?

  • By adding always and eventually to our expressions and making assumptions, we can reason somewhat like this: "If those variables always are set to these values, what happens then?"

Assumptions

Make assumptions that some things are always true (despite the fact they will not always be true) and then use those assumptions to prove things. (This section does not prove things, it just shows an example of assumption.)

Variables
   boolean B <- false
   integer I <- 3

Process Q
   loop forever
1     if not B
2        print "changing it back!"
3        B <- true
4        I <- 3

Process P
   loop forever
1     NCS
2     await B
3        I <- 4
4        B <- false
  • Assume that processes eventually leave the NCS (not true, but assume)

Lemma: always B -> eventually I = 4

  1. P leaves the NCS by assumption
  2. If B is always true, P passes through the await statement and then assigns 4 to I

Lemma: always not B -> eventually I = 3

  1. Q will enter the if-clause and assigns 3 to I

(Again, remember the truth diagram of the -> operator)

Dekker's algorithm is free from starvation

Page 82.

Semaphores

Semaphores are very crude constructs for limiting the number of processes that may use a shared resource at once. A semaphore has two operations: signal and wait.

Semaphores are initialized with two values:

  • S.V = the semaphore's value: how many more processes may pass wait without blocking
  • S.L = the set of processes that are blocked on the semaphore
Wait(S)
   if S.V > 0
      S.V <- S.V - 1
   else
      add this process to S.L
      process.state <- blocked

Signal(S)
   if S.L is empty
      S.V <- S.V + 1
   else
      remove a random process from S.L                   # strong semaphores use queues, not sets
      process.state <- ready
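The pseudocode can be sketched as a Python class. An Event per blocked process stands in for the blocked/ready state changes, and a deque instead of a set makes it a strong semaphore (the class name and structure are my own):

```python
import threading
from collections import deque

class StrongSemaphore:
    """Follows the Wait/Signal pseudocode; the FIFO deque of waiters makes it strong."""
    def __init__(self, k):
        self.v = k                       # S.V
        self.blocked = deque()           # S.L, a queue instead of a set
        self.lock = threading.Lock()     # makes wait/signal behave atomically
    def wait(self):
        with self.lock:
            if self.v > 0:
                self.v -= 1
                return
            gate = threading.Event()     # this process goes to state "blocked"
            self.blocked.append(gate)
        gate.wait()                      # sleep until some signal() hands us the semaphore
    def signal(self):
        with self.lock:
            if not self.blocked:
                self.v += 1
            else:
                self.blocked.popleft().set()   # unblock the first waiter (FIFO)

# usage: a binary StrongSemaphore protecting a critical section
s = StrongSemaphore(1)
entries = []
def worker(name):
    s.wait()
    entries.append(name)     # critical section
    s.signal()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
```

Note that signal() does not increment S.V when it wakes a waiter: the semaphore is handed directly to the first process in the queue, exactly as in the pseudocode.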

Binary semaphore

Also called mutex

  • Initialized to S <- (1, empty_set)
  • Undefined behavior if signal is called when S.V = 1

Strong semaphore

A semaphore that keeps a queue of the processes waiting for it. If used correctly, it can guarantee freedom from starvation.

Compare with ordinary semaphores.

Busy-wait semaphore

Also called a spinlock.

Pseudocode definition:

wait(s)
   await s > 0          # while loop
   s <- s - 1

signal(s)
   s <- s + 1
  • Does not guarantee freedom from starvation (if the scheduler is unfair, or if one wishes to be strict with the temporal logic)
  • Useful when each process has its own CPU, or when contention is low.
  • (Used in Linux kernel when it is known that a critical section will very-very-very soon be OK to enter.)

Proving mutual exclusion for semaphores

Semaphore invariants: If a semaphore S is initialized as S = (K, empty_set) then

  • S.V >= 0
  • S.V = K + #signal(S) - #wait(S)

Page 112 - 113: Proof of mutual exclusion. (READ)

Add more details here

Split semaphores

If two semaphores work in tandem so that the following holds true

S1.V + S2.V = N

Then the semaphores are called a split semaphore.

See 118-119: Bounded buffer.

  • two semaphores: NotFull and NotEmpty.
  • NotFull.V + NotEmpty.V = N, where N is the size of the buffer
Variables
   Semaphore NotFull = (N, empty_set)
   Semaphore NotEmpty = (0, empty_set)
   a buffer

Process producer
   loop forever
       wait(NotFull)
       put item on buffer
       signal(NotEmpty)

Process consumer
    loop forever
       wait(NotEmpty)
       remove one item from buffer
       signal(NotFull)

In the code above, at most N items may be placed in the buffer before the producer is blocked on NotFull. When the buffer becomes empty, the consumer blocks on NotEmpty.

(Asynchronous communication channel created using a split semaphore.)
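The bounded buffer above can be sketched with Python's counting semaphores. With a single producer and a single consumer, no extra mutex around the deque is needed in this sketch:

```python
import threading
from collections import deque

N = 3                                    # buffer capacity
ITEMS = 20
not_full = threading.Semaphore(N)        # NotFull  <- (N, empty_set)
not_empty = threading.Semaphore(0)       # NotEmpty <- (0, empty_set)
buffer = deque()
received = []

def producer():
    for item in range(ITEMS):
        not_full.acquire()               # wait(NotFull): blocks once N items are queued
        buffer.append(item)
        not_empty.release()              # signal(NotEmpty)

def consumer():
    for _ in range(ITEMS):
        not_empty.acquire()              # wait(NotEmpty): blocks while the buffer is empty
        received.append(buffer.popleft())
        not_full.release()               # signal(NotFull)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```

The two acquire/release pairs mirror each other: every item moves one "permit" from NotFull to NotEmpty and back, which is exactly the split-semaphore invariant.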

Monitors

Monitors are synchronized objects (a perfect fit for object-oriented languages) for which the following is true:

  • Only one process at a time may run any of the monitor's methods
  • Only one process at a time may change the monitor's fields (variables)

Also

  • Always [p. 146, Principles of Concurrent and Distributed Programming, Ben-Ari] the monitor's fields are private, inaccessible from the outside.
  • A monitor blocks a process trying to enter it if another process is currently using the monitor.

Queues for blocked processes?

  • Some monitors store their blocked processes in queues (just as strong semaphores do)
  • Others do not; depending on the implementation, starvation might be a problem.
  • After stating the above, the book pretty much assumes that monitors keep queues of blocked processes.

Condition variables

A condition variable holds a queue of processes that wait, inside the monitor, for some condition to become true.

WaitC(condition variable)

  1. Puts the caller into a FIFO queue
  2. Unlocks the monitor's lock, allowing other processes to enter

SignalC(condition variable)

  1. If the FIFO-queue is not empty:
    1. The first process on the queue is unblocked (and removed from the queue)

WaitC(cond) and SignalC(cond) may be used inside monitors to handle blocking and unblocking of the processes that enter them. Compare with how protected objects handle this (see below).
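A small monitor with one condition variable, sketched in Python: the Lock plays the monitor's lock and threading.Condition plays waitC/signalC (one caveat: Python's notify() does not guarantee FIFO wakeup, so this is not a "queue" monitor in the strict sense). The `BoundedCounter` example is my own:

```python
import threading

class BoundedCounter:
    """A monitor: one lock guards the fields, one condition variable plays waitC/signalC."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.lock = threading.Lock()                    # the monitor's lock
        self.below_limit = threading.Condition(self.lock)
    def increment(self):
        with self.lock:                                 # one process in the monitor at a time
            while self.value >= self.limit:
                self.below_limit.wait()                 # waitC: block and release the lock
            self.value += 1
    def decrement(self):
        with self.lock:
            self.value -= 1
            self.below_limit.notify()                   # signalC: unblock a waiter

c = BoundedCounter(2)
c.increment()
c.increment()
done = []
def blocked_incrementer():
    c.increment()            # blocks: the counter is at its limit
    done.append(True)
t = threading.Thread(target=blocked_incrementer)
t.start()
c.decrement()                # makes room; the blocked thread can now proceed
t.join()
```

The `while` (rather than `if`) around wait() is the standard guard against waking up when the condition no longer holds.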

Protected objects

Protected objects are like monitors, but

  • have queues for storing blocked processes (is this always true in all implementations?)
  • have guards on their operation clauses that block processes as long as the guards do not evaluate to true.
    • This avoids the use of waitC and signalC found in monitors
protected object Vaab_And_Solvo
   enumeration RobotType <- { Vaab | Solvo }

   integer vaabs <- 0
   integer solvos <- 0
   set of ProcessTypes working_robots

   #### robots enter the repair house ###                  # these are guards
   operation enter(ProcessType P, RobotType RT) when ( solvos < 2*vaabs-1 ) and \
                                                        ( vaabs < 2*solvos-1 ) and \
                                                        ( vaabs + solvos <= 20 )
      working_robots <- working_robots union {P}
      if RT = Vaab
         vaabs <- vaabs + 1
      else
         solvos <- solvos + 1

   #### robots leave the repair house ####
   operation leave(ProcessType P, RobotType RT) when ( solvos = vaabs ) OR \
                                                    ( ( solvos-1 < 2*vaabs ) and \
                                                    ( vaabs-1 < 2*solvos ) )
      working_robots <- working_robots - {P}
      if RT = Vaab
         vaabs <- vaabs - 1
      else
         solvos <- solvos -1

Above is a solution to question 4 of exam 021025, using a protected object instead of a monitor.

Channels

Synchronicity

  • Synchronous message passing = a zero-sized buffer is used. Both processes that communicate must synchronize their send and receive operations.
  • Asynchronous message passing = a buffer is used to store the messages. The receiving process can read received messages a while after they have been sent. (The sender blocks if the buffer fills up; the receiver blocks if it is empty.)

Addressing the messages

When sending messages over a channel, one can select different solutions for addressing.

  • anonymous sender (receiver can not reply, unless sender sent a reply channel)
  • via intermediate (registered processes in erlang: sending to an atom instead of a Pid)
  • probably simplest to include the sender's ID in all messages by default

(This is probably not on the exam, but maybe it is interesting.)

How channels work

  • Channels are typed (just like in JR)
  • Channels may be sent through channels (Example: "Here, reply via this one when you are done.")

Selective input

  • inni and [] in JR.

Pseudocode example

  • A => B means "receive a message from channel A and store it in B" (the types must match)
  • A <= B means "send B on channel A"
  • either/or/or is selective receive
Variables
   channel ch1 of integer
   channel ch2 of boolean
   channel ch3 of (integer, channel of (integer, boolean))

   integer i <- 0
   integer j
   boolean b <- false
   boolean c
   channel external_channel of (integer, boolean)

Process P
   loop forever
      either
         ch1 => j 
         i <- i + j
      or
         ch2 => c
         if c
            b <- not b
      or
         ch3 => (j, external_channel)
            i <- i + j
            external_channel <= (i, b)

Explanation: when someone sends a message to ch3, they get back the integer and boolean state that have previously been stored (messages on ch1 and ch2 modulate the process fields i and b). Just a result of random doodling! :P

This is an example of anonymous sender, using a reply channel to get the results.

Erlang example

       ...
       receive 
          {From, output, Msg} -> do something
          {From, kaos}        -> do something
          {_, Random, other}  -> do something
       end;
       ...

Channels are not typed per se; atoms, bound variables, wildcards and the general structure of the message are used for matching instead.

Spaces / Linda

Spaces are a form of message passing between processes. A process can send messages to other processes that start running and receive them at a much later time.

  • A message is a typed tuple of some sort
  • Just like in lab3. (Except Erlang ignores much of the type-thingies.)

API

  • postnote/out()
    • Stores a tuple in the space.
  • removenote/in()
    • Might block!!!
    • Receives a tuple from the tuplespace. The types need to match the asked-for tuple's types, with possible "any"-wildcards.
  • readnote/rd()
    • Same as removenote except it does not remove the tuple, it just copies it.

It feels a bit pointless to describe how it works in more detail. We all know it already.
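Still, the API can be sketched in a few lines. The `TupleSpace` class and `ANY` wildcard below are made up for illustration and are not the actual lab interface:

```python
import threading

class TupleSpace:
    """A minimal Linda-style tuple space; ANY is a wildcard in patterns."""
    ANY = object()

    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def _match(self, pattern):
        for t in self.tuples:
            if len(t) == len(pattern) and all(
                    p is TupleSpace.ANY or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def out(self, *t):                     # postnote: store a tuple in the space
        with self.cond:
            self.tuples.append(t)
            self.cond.notify_all()

    def in_(self, *pattern):               # removenote: blocks until a match, then removes it
        with self.cond:
            while (t := self._match(pattern)) is None:
                self.cond.wait()
            self.tuples.remove(t)
            return t

    def rd(self, *pattern):                # readnote: blocks until a match, only copies it
        with self.cond:
            while (t := self._match(pattern)) is None:
                self.cond.wait()
            return t

space = TupleSpace()
space.out("count", 42)
note = space.rd("count", TupleSpace.ANY)   # copies the tuple, does not remove it
taken = space.in_("count", 42)             # removes the tuple
```

The blocking loop around `_match` is what makes removenote/in() "Might block!!!": a process asking for a tuple that is not there sleeps until someone out()s a matching one.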

Random codes

Just random code examples...

Implementing a binary semaphore using Erlang message passing

  • Like exam 021025 question 2, except uses erlang message passing instead of a protected object.
  • By accident made it a strong binary semaphore :d
-module(binsem).
-export([new/0, signal/1, wait/1]).

new() ->
   spawn_link(fun() -> semaphoreProgram(true, []) end).

signal(S) ->
   Ref = make_ref(),
   S ! {self(), Ref, signal},
   receive
      Ref ->
         ok
   end.

wait(S) ->
   Ref = make_ref(),
   S ! {self(), Ref, wait},
   receive
      Ref ->
         ok
   end.

unblockProcess([]) ->
   [];
unblockProcess([{From, Ref}|Blocked]) ->
   From ! Ref,
   Blocked.

semaphoreProgram(Boolean, BlockedProcesses) ->
   receive
      {From, Ref, signal} ->
         From ! Ref,
         case BlockedProcesses of
            [] ->
               semaphoreProgram(true, []);
            _ ->
               % the first waiter is handed the semaphore, so it stays taken
               semaphoreProgram(false, unblockProcess(BlockedProcesses))
         end;
      {From, Ref, wait} ->
         if Boolean == false ->
            % append as a one-element list; appending the bare tuple would crash
            semaphoreProgram(false, BlockedProcesses ++ [{From, Ref}]);
         true ->
            From ! Ref,
            semaphoreProgram(false, BlockedProcesses)
         end;
      stop ->
            ok
   end.