A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.

Background

Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an earth-sized object is impossible.

Objectives

Computer experiments have been employed with many purposes in mind. Some of those include:

Uncertainty quantification: Characterize the uncertainty present in a computer simulation arising from unknowns during the computer simulation's construction.
Inverse problems: Discover the underlying properties of the system from the physical data.
Bias correction: Use physical data to correct for bias in the simulation.
Data assimilation: Combine multiple simulations and physical data sources into a complete predictive model.
Systems design: Find inputs that result in optimal system performance measures.
Computer simulation modeling

Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief on the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989). While the Bayesian approach is widely used, frequentist approaches have also been discussed recently.

The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function that maps these inputs into a collection of outputs. On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as x, the computer simulation itself as f, and the resulting output as f(x). Both x and f(x) are vector quantities, and they can be very large collections of values, often indexed by space, or by time, or by both space and time.

Although f(·) is known in principle, in practice this is not the case. Many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition. For some simulations, such as climate models, evaluation of the output for a single set of inputs can require millions of computer hours.
Gaussian process prior

The typical model for a computer code output is a Gaussian process. For notational simplicity, assume f(x) is a scalar. Owing to the Bayesian framework, we fix our belief that the function f follows a Gaussian process,

f ∼ GP(m(·), C(·, ·)),

where m is the mean function and C is the covariance function. Popular mean functions are low order polynomials, and a popular covariance function is the Matérn covariance, which includes both the exponential (ν = 1/2) and Gaussian covariances (as ν → ∞).
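To make the model above concrete, the following minimal sketch conditions a zero-mean Gaussian process on a handful of simulator runs and predicts the output at a new input. It assumes NumPy; the toy simulator, the design points, and the squared-exponential (Gaussian) covariance with lengthscale 0.5 are illustrative choices, not prescribed by the text.

```python
import numpy as np

def cov(a, b, length=0.5):
    # Squared-exponential (Gaussian) covariance, the nu -> infinity
    # limit of the Matern family mentioned above.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_new, jitter=1e-10):
    # Posterior mean of a zero-mean GP conditioned on the simulator runs.
    K = cov(x_train, x_train) + jitter * np.eye(len(x_train))
    return cov(x_new, x_train) @ np.linalg.solve(K, y_train)

f = lambda x: np.sin(3 * x)           # cheap stand-in for an expensive simulator
x_train = np.linspace(0.0, 2.0, 8)    # design points x
y_train = f(x_train)                  # deterministic simulator outputs f(x)

pred = gp_predict(x_train, y_train, np.array([0.9]))
print(pred)  # roughly sin(2.7)
```

Because the simulator is treated as deterministic, no noise term is added; the tiny jitter on the diagonal only guards against numerical ill-conditioning.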
Design of computer experiments

The design of computer experiments has considerable differences from design of experiments for parametric models. Since a Gaussian process prior has an infinite-dimensional representation, the concepts of A and D criteria (see optimal design), which focus on reducing the error in the parameters, cannot be used. Replications would also be wasteful in cases when the computer simulation has no error. Criteria that are used to determine a good experimental design include integrated mean squared prediction error and distance-based criteria.

Popular strategies for design include Latin hypercube sampling and low-discrepancy sequences.
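As a sketch of the first strategy, Latin hypercube sampling places exactly one point in each of n equal-probability strata along every dimension, pairing the strata across dimensions at random. This is a minimal NumPy implementation, assuming the unit hypercube as the design region:

```python
import numpy as np

def latin_hypercube(n, dim, seed=None):
    # One sample per equal-width stratum in each dimension; strata are
    # matched across dimensions by independent random permutations.
    rng = np.random.default_rng(seed)
    u = rng.random((n, dim))                               # position within each stratum
    strata = np.array([rng.permutation(n) for _ in range(dim)]).T
    return (strata + u) / n                                # design points in [0, 1)^dim

pts = latin_hypercube(10, 3, seed=0)
print(pts.shape)  # (10, 3)
```

Projected onto any single axis, the 10 points fall one per interval [k/10, (k+1)/10), which is the defining property of a Latin hypercube design.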
Problems with massive sample sizes

Unlike physical experiments, it is common for computer experiments to have thousands of different input combinations. Because the standard inference requires matrix inversion of a square matrix of the size of the number of samples (n), the cost grows as O(n³). Matrix inversion of large, dense matrices can also cause numerical inaccuracies. Currently, this problem is solved by greedy decision tree techniques, allowing effective computations for unlimited dimensionality and sample size (patent WO2013055257A1), or avoided by using approximation methods.
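One simple approximation method of the kind alluded to above is subset-of-data: condition the Gaussian process on only m ≪ n runs, reducing the O(n³) solve to O(m³) at some cost in accuracy. The sketch below assumes NumPy and uses an exponential (Matérn ν = 1/2) covariance; it is illustrative only and is not the patented greedy technique mentioned in the text.

```python
import numpy as np

def exp_cov(a, b, length=0.2):
    # Exponential covariance, the Matern nu = 1/2 case.
    return np.exp(-np.abs(a[:, None] - b[None, :]) / length)

def gp_mean(x_train, y_train, x_new, jitter=1e-8):
    # Exact inference solves against the n x n covariance matrix:
    # O(n^3) time and O(n^2) memory -- the bottleneck for large designs.
    K = exp_cov(x_train, x_train) + jitter * np.eye(len(x_train))
    return exp_cov(x_new, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
x = np.sort(rng.random(2000))         # a large design on [0, 1)
y = np.cos(4 * x)                     # stand-in for simulator output

# Subset-of-data: keep m << n randomly chosen runs and discard the rest.
m = 100
idx = rng.choice(len(x), size=m, replace=False)
approx = gp_mean(x[idx], y[idx], np.array([0.5]))
print(approx)  # roughly cos(2.0)
```

More refined alternatives select the subset greedily or use structured low-rank approximations of the covariance matrix instead of discarding runs at random.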
See also

Simulation
Uncertainty quantification
Bayesian statistics
Gaussian process emulator
Design of experiments
Molecular dynamics
Monte Carlo method
Surrogate model
Grey box completion and validation
Artificial financial market
Further reading

Santner, Thomas (2003). The Design and Analysis of Computer Experiments. Berlin: Springer. ISBN 0-387-95420-1.
Fehr, Jörg; Heiland, Jan; Himpe, Christian; Saak, Jens (2016). "Best practices for replicability, reproducibility and reusability of computer-based experiments exemplified by model reduction software". AIMS Mathematics. 1 (3): 261–281. arXiv:1607.01191. doi:10.3934/Math.2016.3.261. S2CID 14715031.