Littlefield Technologies Simulation Solution

Executive Summary

Our team operated and managed the Littlefield Technologies facility over the span of 268 simulated days and finished the simulation in 3rd place, posting $2,234,639 in cash at the end of the game. We began with an intuitive analysis and formed our strategy at the start of the game. We then applied the knowledge we learned in class, performed process analysis, and modified our strategies dynamically according to the performance results. The experience reinforced many of the concepts and lessons learned in class and gave us a better understanding of how the Littlefield Technologies facility operates and how particular modifications affect throughput and lead time.

The Plan – Initial Strategy
Our team’s objective was to maximize the cash generated by the factory over the product lifetime. To achieve this goal, we did our initial planning using the first 50 days of historical data. On the one hand, we ran a regression analysis on demand and anticipated demand growth over the next 150 days. Based on the 50 days of data, the regression projected around 8 jobs/day by Day 150. We also calculated a mean of 2.5 jobs/day, a standard deviation of 1.78, and a variance of 3.15; demand fluctuated considerably, with a coefficient of variation (CV = standard deviation / mean) of 0.71. (Table 1)
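The descriptive statistics and trend forecast above come from a straightforward least-squares fit. A minimal sketch in Python, using made-up daily order counts (not the actual Day 1–50 data, so the numbers will differ from those in the text):

```python
import statistics

# Illustrative daily order counts (NOT the actual Day 1-50 data).
days = list(range(1, 11))
jobs = [1, 3, 2, 4, 2, 3, 5, 4, 6, 5]

mean = statistics.mean(jobs)      # average jobs/day
stdev = statistics.pstdev(jobs)   # population standard deviation
cv = stdev / mean                 # coefficient of variation

# Ordinary least-squares fit of jobs vs. day to extrapolate demand.
n = len(days)
sx, sy = sum(days), sum(jobs)
sxx = sum(d * d for d in days)
sxy = sum(d * j for d, j in zip(days, jobs))
slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
intercept = (sy - slope * sx) / n

forecast = intercept + slope * 20  # extrapolated demand at day 20
print(round(mean, 2), round(cv, 2), round(forecast, 1))
```

With the real 50-day series in place of the toy list, the same fit yields the ~8 jobs/day projection at Day 150 and the CV of 0.71 cited above.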

On the other hand, we reviewed the utilization and queue size for each machine and checked the revenue, completed jobs, and lead time data. We noticed that on Days 40, 42, and 44, the machine at Station 1 had utilization above 90%, which indicated that Station 1 was a bottleneck. After identifying the bottleneck, we decided to purchase a machine for Station 1 on Day 51 to see how this modification would affect the factory's operation.
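Flagging a bottleneck from utilization data is a simple threshold check. A sketch using the 90% threshold from the text; the utilization numbers below are hypothetical, not the actual Day 40–44 readings:

```python
# Hypothetical utilization snapshots for Days 40-44 (illustrative only).
utilization = {
    "station_1": [0.93, 0.88, 0.95, 0.87, 0.92],
    "station_2": [0.61, 0.58, 0.64, 0.60, 0.57],
    "station_3": [0.72, 0.70, 0.75, 0.69, 0.71],
}

THRESHOLD = 0.90  # utilization above this flags a likely bottleneck


def bottlenecks(util, threshold=THRESHOLD):
    """Return stations whose peak utilization exceeds the threshold."""
    return [s for s, u in util.items() if max(u) > threshold]


print(bottlenecks(utilization))  # -> ['station_1']
```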

Decision analysis—Actions & Analysis
Overall, we watched our factory diligently, checking its health every few hours. We adjusted various parameters to avoid losing money while making as much money as possible for the factory. As more data became available, we re-ran the regression analysis regularly to forecast orders; the updated regressions showed a demand level higher than our initial forecast. We made the following parameter changes while operating the factory:

1. Changing Contract Terms

The game begins with Contract 1, which pays $1,000/job. Because the average lead time was below 0.5 day/job on Day 54, and Contract 2 pays an extra $500/job, we switched to Contract 2 ($1,500/job) on that day to boost profit. On two occasions when we could not communicate with each other and noticed we had been penalized, with revenue dropping below $1,000/job over the following days, we reactively changed back to Contract 1 to guarantee at least $1,000/job. After an analysis of marginal benefit and marginal cost, however, we immediately switched back to Contract 2, as it generates cash much faster.

2. Purchasing Machines
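The contract decision above reduces to comparing expected revenue per job at the current lead time. A hedged sketch: the $1,000 and $1,500 prices and the 0.5-day threshold come from the text, but the linear penalty falloff below is a guessed shape, not the game's exact revenue rule:

```python
def revenue_per_job(contract, lead_time_days):
    """Rough per-job revenue model. The penalty falloff for Contract 2
    is a hypothetical linear shape, not Littlefield's actual rule."""
    if contract == 1:
        return 1000.0  # flat $1,000/job (assumed always within quote)
    # Contract 2: full $1,500 under 0.5 day, linear falloff afterward.
    if lead_time_days <= 0.5:
        return 1500.0
    return max(0.0, 1500.0 - 1500.0 * (lead_time_days - 0.5))


def best_contract(lead_time_days):
    """Pick the contract with the higher expected revenue per job."""
    return max((1, 2), key=lambda c: revenue_per_job(c, lead_time_days))


print(best_contract(0.3))  # short lead time -> 2
print(best_contract(1.2))  # long lead time  -> 1
```

This mirrors the team's behavior: Contract 2 while lead times are short, falling back to Contract 1 once penalties push effective revenue below $1,000/job.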

Since our goal was to maximize the cash position and buying a new machine is costly, we initially invested conservatively: we bought a machine only when a station became a bottleneck. More specifically, we used lead time to decide whether capacity was constrained, and then identified the bottleneck based on utilization. A station qualified when it showed more than 90% utilization and a growing queue, which lengthened lead times and triggered penalties that cut revenue below $1,000/job. These reactive purchases all happened before Day 135. On Day 135, we changed our conservative machine-buying strategy into an aggressive one after the following revenue analysis (assuming, from the regression, a consistent order rate of 12 jobs/day in the later stage):

a. If we keep the batch size and switch to Contract 1: Revenue (roughly) = 12 * 1000 * (268 - 135) = 12 * 133 * 1000 = $1,596,000.

b. If we change the batch size to 1 x 60: based on the Day 1-50 data, the lead time is always > 0.3, so we could not use Contract 2 to increase revenue and would still have to use Contract 1. Revenue would be the same as in (a): $1,596,000.

c. If we buy a machine for Station 3, the bottleneck, without changing anything else: Station 3's utilization falls, giving a shorter queue, less waiting time, shorter lead time, and little or no penalty, hence more revenue. Revenue (roughly) = 12 * 1500 * (268 - 135) = 12 * 133 * 1500 = $2,394,000. Machine cost = $100,000.

$2,394,000-$100,000 = $2,294,000 > $1,596,000
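The back-of-the-envelope comparison above is plain arithmetic and can be checked directly:

```python
DAYS_LEFT = 268 - 135  # remaining simulated days from Day 135
DEMAND = 12            # forecast jobs/day in the later stage

rev_contract1 = DEMAND * 1000 * DAYS_LEFT  # options (a) and (b)
rev_contract2 = DEMAND * 1500 * DAYS_LEFT  # option (c), gross
MACHINE_COST = 100_000
net_option_c = rev_contract2 - MACHINE_COST

print(rev_contract1)  # 1596000
print(net_option_c)   # 2294000
assert net_option_c > rev_contract1  # buying the machine dominates
```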

According to this analysis, (c) was the optimal choice, which justified our aggressive machine-buying strategy from Day 135 onward. On Day 149 and Day 170, we immediately bought machines for Stations 2 and 1, respectively, when they became bottlenecks or when lead time exceeded 0.28 day, which had cut revenue to $1,200/job.

3. Changing Lot Size

We changed the lot size three times. On Day 58, we changed to 2 lots/job in order to qualify for Contract 2 and earn more money. On Day 64, we changed to 3 lots/job, hoping the lead time would decrease. This was a big mistake: after the change, the queue at Station 3 grew very high, Station 3 became the bottleneck, and our revenue dropped dramatically below $500/job. So, on Day 69, we switched back to 2 lots/job. We kept monitoring closely and found that this setting worked well.

4. Changing the Way Station 2 Is Scheduled

We changed the scheduling of Station 2 several times. When the queue at Station 3 was high, we gave priority to step 4 so those jobs would finish quickly; when the queue at Station 2 was high, we set the priority to step 2 to speed up the initial test. Based on the professor's feedback that this does not matter in the long run, we sometimes changed back to FIFO, the default.

5. Exit Strategy
After we bought a machine for Station 1 on Day 170, our factory ran smoothly and we did not monitor it as frequently as before. Near the end of the game, we considered selling one machine in Station 1. After discussion, we determined it was not worth selling: the salvage value is only $10,000, which is far too low, and selling the machine might also hurt the factory's performance. So, we decided not to sell any machines. In addition, we tracked the team rankings from time to time and noticed that we were accumulating cash faster than the teams ranked above us (#1 through #4). After some rough analysis, we knew we would finish at least #4 and could reach #2 or #3 with some luck; in the end, we finished #3.

Lessons Learned

Although we are pleased with our final results compared to the rest of the class, we see there is still room for improvement. We made a couple of mistakes, but most importantly we learned from them. Here is a discussion of the right decisions we made and the things we should have considered.

1. Timing of machine purchases

Our machine-buying strategy relied on utilization and queue size to detect bottlenecks. If we had been able to forecast when to buy a machine, we could have acted proactively rather than reactively. We recognized the tradeoff between capacity and waiting time, yet we waited until lead times grew so long that we were earning little revenue before buying machines.

2. Batch size too small – setup cost?

At one point, we changed the batch size to 20, which made Station 3 a bottleneck with a high queue and pushed revenue below $500. If we had analyzed lead time under various batch sizes, we would have avoided that mistake. Based on the batching tradeoff formula learned in class, at each machine: Flow Time (of each batch) = Setup time + Run time * X (batch size in units), and Capacity = X / (Setup time + Run time * X). As X decreases, flow time decreases (good) and capacity decreases (bad). As utilization increases, the queue grows exponentially, and with it the waiting time. When choosing a batch size, there is therefore a tradeoff between capacity and inventory, and it is important to balance these two conflicting objectives: large batches lead to large inventory; small batches lead to losses in capacity.
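The batching tradeoff formula can be tabulated to show both effects at once. A sketch using hypothetical setup and run times (the actual Littlefield values are not given in the text):

```python
SETUP = 0.1  # hypothetical setup time per batch (hours)
RUN = 0.05   # hypothetical run time per unit (hours)


def flow_time(x):
    """Time for one batch of x units to clear the machine: setup + run."""
    return SETUP + RUN * x


def capacity(x):
    """Units processed per hour at batch size x."""
    return x / (SETUP + RUN * x)


for x in (10, 20, 60):
    print(x, round(flow_time(x), 2), round(capacity(x), 2))
```

The table makes the conflict explicit: as the batch size grows, flow time per batch rises (bad for lead time) while capacity rises toward its asymptote of 1/RUN (good for throughput), which is exactly the tradeoff that made the batch-size-20 change backfire.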


