@sandys · Created October 10, 2012
Coursera operations management notes

worked example: three workers (S1, S2, S3) feeding one machine (C1)

             S1     S2        S3     C1
pt (min)     1      3+5 = 8   4      10
capacity     1/1    1/8       1/4    1/10
utilization  1/10   8/10      4/10   10/10

process capacity = 1/10, flow rate = 1/10, CT = 10, bottleneck = C1
direct labor content = 1 + 8 + 4 = 13 (C1 excluded; apparently a machine, not labor)
idle time = (10-1) + (10-8) + (10-4) + (10-10) = 9 + 2 + 6 + 0 = 17
avg labor utilization = 13/(13+17)
cost of direct labor = (12 * 3) / (1/10 * 60) = 36/6 = 6 per unit (assuming 3 workers paid $12/hr and a flow rate of 6 units/hr)

R = 21.6/min, T = 5 min, I = R * T = 108 (Little's law)

COGS = 110,050,000; revenue = 155,000,000; COGS fraction = cogs/rev = 0.71; inventory = 20,000,000

inventory turns = COGS/inventory = 110,050,000/20,000,000 = 5.5025
annual inventory cost per unit = 50 * 0.71 * 0.35 = 12.425 (assuming a $50 unit price and a 35% annual holding rate)
per-unit inventory cost = 12.425/5.5025 ≈ 2.26

project funnel: 25 -> screening (40% fail = 10) -> 15 -> prototype -> testing (50% fail = 7.5) -> 7.5 -> focus group (65% fail = 4.875) -> 2.625 -> final (25% fail = 0.656) -> 1.96 launched
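The same funnel as cumulative pass rates in R (a sketch; stage names shortened):

starts <- 25
fail <- c(screening = 0.40, testing = 0.50, focus = 0.65, final = 0.25)
survivors <- starts * cumprod(1 - fail)   # projects still alive after each stage
survivors                                 # 15, 7.5, 2.625, ~1.97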

first pass (prototype treated as a single step):

           screen   proto   test   focus   final
pt (hrs)   2        100     48     2       3

refined (prototype split into cad + program); 8-hour days, 5-day weeks:

                  screen   cad     program     test    focus   final
pt (hrs)          2        4       96          48      2       3
res               3        4       9999        6       1       1
cap (units/hr)    1.5      1       104.15625   0.125   0.5     0.3333
cap/day           12       8       833.25      1       4       2.6667
cap/week          60       40      4166.25     5       20      13.3333
demand (/week)    25       15      15          7.5     2.625   1.96
impl util         0.417    0.375   0.0036      1.5     0.131   0.147

highest implied utilization = test (150%), so testing is the bottleneck
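The refined table reproduced in R (a sketch; implied utilization is computed dimensionlessly as weekly demand over weekly capacity):

pt     <- c(screen = 2, cad = 4, program = 96, test = 48, focus = 2, final = 3)  # hours per unit
res    <- c(3, 4, 9999, 6, 1, 1)              # parallel resources (9999 = effectively unlimited)
demand <- c(25, 15, 15, 7.5, 2.625, 1.96)     # projects per week reaching each step
cap_hr   <- res / pt                          # units per hour
cap_week <- cap_hr * 8 * 5                    # 8-hour days, 5-day weeks
impl_util <- demand / cap_week
round(impl_util, 3)                           # test = 1.5 -> highest, the bottleneck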

  • Four aspects of operations - cost, quality, heterogeneity, time

  • cumulative inflow vs outflow graph - vertical distance is queue length (called inventory here) and horizontal distance is flow time

    • metrics - flow units (people, cheese, bread, etc.), flow rate (customers per unit time; also the slope of the cumulative inflow/outflow curves), flow time, inventory
    • processing time and activity time mean the same thing: the time taken to perform a group of activities on one flow unit at a step
  • Terminology

    • processing time = time taken by worker to do one task
    • capacity = 1/processing time: how many units a worker can process per unit of time
    • bottleneck = process step with lowest capacity
    • process capacity = min (capacities)
    • flow rate = min(demand rate, process capacity) (this is the slope of the cumulative outflow curve)
    • utilization = flow rate/ capacity (bottleneck will have 100% utilization)
    • flow time = horizontal distance
    • inventory = vertical distance
    • implied utilization = demand/capacity. can be greater than 100% if we have more demand than capacity.
      • highest implied utilization marks the bottleneck (sketched in R below)
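A minimal R sketch of these definitions (the three processing times and the demand rate are made up for illustration):

pt <- c(step1 = 4, step2 = 6, step3 = 3)          # processing times, minutes per unit (assumed)
capacity <- 1 / pt                                # units per minute at each step
process_capacity <- min(capacity)                 # 1/6: step2 has the lowest capacity
demand_rate <- 1 / 8                              # units per minute (assumed)
flow_rate <- min(demand_rate, process_capacity)   # demand-constrained here: 1/8
utilization <- flow_rate / capacity               # highest at step2, but below 100%
implied_utilization <- demand_rate / capacity     # all below 100%: capacity covers demand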
  • Labor productivity

    • capacity = # resources/processing time (same as 1/processing time * #resources)
    • process capacity = min (of all capacities)
    • cycle time = 1/flow rate (how much time to exit 1 unit)
    • direct labor content = sum(processing times)
    • direct idle time = (CT-p1) + (CT-p2) + ... [where CT = cycle time]. this alone should never be taken as a metric of effectiveness, but always in conjunction with direct labor content
    • average labor utilization = direct labor content / (direct labor content + direct idle time)
    • cost of direct labor = total wages per unit time of all resources /flow rate per unit time
  • Importance of labor costs

    • a large part of manufacturing cost is purchases from suppliers.
    • however, if we roll the suppliers' own costs up into our P&L, labor costs play a much bigger role than they first appear to
    • top manufacturers therefore work with suppliers to reduce their labor costs.
  • Multiple flow units

    • with multiple flow units, the lowest-capacity step is no longer necessarily the bottleneck, because a low-capacity resource may see little of the demand; compare implied utilization (demand/capacity) per resource instead (see the session 7 code below)
  • Little's law

    • Inventory = Flow rate * Flow time (coming from the slope)
      • any two can be chosen by management, the third is given by calculation
      • holding throughput (flow rate) constant, we can only reduce inventory by reducing flow time
      • flow time is hard to observe directly; in practice it is computed from Little's law as T = I/R
      • questions which ask you to compute I (given R and T) usually come up when estimating a new business; for an existing business you can simply count the inventory (worked example below)
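A one-line check of Little's law using the scratch numbers above (R = 21.6 per minute, T = 5 minutes):

R <- 21.6            # flow rate, units per minute
T <- 5               # flow time, minutes
I <- R * T           # average inventory = 108 units in the system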
  • Inventory Turns = cost of goods sold/inventory

    • derived from Little's law: take I = total inventory (valued at cost) and R = cost of goods sold per year; then flow time T = I/R years and inventory turns = 1/T = COGS/inventory
    • use COGS, not revenue, so that flow and inventory are valued on the same basis
    • inventory turns are used to calculate various inventory costs
      • per-unit inventory cost = annual holding cost per unit / inventory turns. slow turns make holding inventory expensive. (NOTE: apply this to product development) (worked numbers below)
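The inventory-turns arithmetic from the scratch lines above, in R (the $50 unit price and 35% annual holding rate are assumptions read into "50 * 0.71 * 0.35"):

cogs <- 110050000
rev  <- 155000000
inv  <- 20000000
turns <- cogs / inv                                    # 5.5025 turns per year
cogs_fraction <- cogs / rev                            # 0.71
annual_holding_per_unit <- 50 * cogs_fraction * 0.35   # 12.425 $/unit/year (assumed price and rate)
per_unit_inventory_cost <- annual_holding_per_unit / turns   # ~2.26 $/unit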
# labor productivity: one worker per step, no demand constraint
d <- data.frame(PT = c(3, 2, 5, 1))            # processing time per step
d$capacity <- 1 / d$PT                         # units per unit time, one resource per step
d$process_capacity <- min(d$capacity)          # capacity of the bottleneck step
d$flow_rate <- d$process_capacity              # no demand numbers here, so flow rate = process capacity
d$cycle_time <- 1 / d$process_capacity
d$idle_time <- d$cycle_time - d$PT             # per-step idle time within one cycle
d$total_idle_time <- sum(d$idle_time)
d$total_labor_content <- sum(d$PT)             # direct labor content
d$labor_util <- d$total_labor_content / (d$total_labor_content + d$total_idle_time)
d$util <- d$flow_rate / d$capacity             # bottleneck will have 100% utilization
d$avg_util <- mean(d$util)
# Philadelphia hospital: 10 births/day; 80% easy deliveries (2-day stay), 20% hard (5-day stay); average occupancy?
d <- data.frame(prob = c(0.8, 0.2), stay = c(2, 5))
d$inv <- 10 * d$stay            # Little's law per class, as if all 10/day had this stay
d$res <- d$prob * d$inv         # weight by class probability
total <- sum(d$res)             # 0.8*10*2 + 0.2*10*5 = 26 mothers on average
# session 7: travel agency with three demand types (foreign, regular, ez)
d <- data.frame(res = c("file", "foreign", "d1", "d2", "print"), PT = c(3, 20, 15, 8, 2))
d$num_res <- c(1, 2, 3, 2, 1)
d$cap <- 60 * d$num_res / d$PT   # units per hour per resource pool
demand <- data.frame(foreign = 3, regular = 11, ez = 4)   # requests per hour
# session 7, redone as a matrix
# tip: d <- d[,-5] deletes a column
d <- cbind(PT = c(3, 20, 15, 8, 2))
rownames(d) <- c("file", "foreign", "d1", "d2", "print")
d <- cbind(d, num_res = c(1, 2, 3, 2, 1))
d <- cbind(d, cap = 60 * d[, "num_res"] / d[, "PT"])   # units per hour
d <- cbind(d, for_dem = rep(NA, nrow(d)))
d <- cbind(d, reg_dem = rep(NA, nrow(d)))
d <- cbind(d, ez_dem = rep(NA, nrow(d)))
d["file","for_dem"] = 3
d["file","reg_dem"] = 11
d["file","ez_dem"] = 4
d["foreign","for_dem"]=3
d["d1","for_dem"]=3
d["d1","reg_dem"]=11
d["d2","ez_dem"]=4
d["print","ez_dem"]=4
d["print","reg_dem"]=11
d["print","for_dem"]=3
d <- cbind(d,total_dem=rowSums(d[,4:6],na.rm=TRUE))
d <- cbind(d,impl_util=d[,"total_dem"]/d[,"cap"])
#the above model assumes that processing time is uniform for all kinds of demand. that might be incorrect
#minute of work model
d <- cbind(PT=c(3,20,15,8,2))
rownames(d) <- c("file","foreign","d1", "d2","print")
d <- cbind(d, num_res=c(1,2,3,2,1))
d <- cbind(d, time_avail=60*d[,"num_res"])
d <- cbind(d,for_dem=rep(NA,nrow(d)))
d <- cbind(d,reg_dem=rep(NA,nrow(d)))
d <- cbind(d,ez_dem=rep(NA,nrow(d)))
d["file","for_dem"] = 3
d["file","reg_dem"] = 11
d["file","ez_dem"] = 4
d["foreign","for_dem"]=3
d["d1","for_dem"]=3
d["d1","reg_dem"]=11
d["d2","ez_dem"]=4
d["print","ez_dem"]=4
d["print","reg_dem"]=11
d["print","for_dem"]=3
# per-unit effort in minutes, by demand type (rows) and resource (columns);
# uniform across types here, but this model allows type-specific efforts
effort <- cbind(file_eff  = rep(3, 3),
                for_eff   = rep(20, 3),
                d1_eff    = rep(15, 3),
                d2_eff    = rep(8, 3),
                print_eff = rep(2, 3))
rownames(effort) <- c("for", "reg", "ez")
# minutes of work per resource = demand (units/hr) * effort (min/unit), summed over types
dem <- d[, c("for_dem", "reg_dem", "ez_dem")]
dem[is.na(dem)] <- 0                  # NA means this type never visits the resource
work <- dem * t(effort)               # elementwise: 5 resources x 3 types
colnames(work) <- c("for_min", "reg_min", "ez_min")
d <- cbind(d, work)
d <- cbind(d, total_min = rowSums(d[, c("for_min", "reg_min", "ez_min")]))
d <- cbind(d, impl_util = d[, "total_min"] / d[, "time_avail"])