. To describe this connection, consider the optimal recovery setting of Micchelli and Rivlin, in which one tries to approximate an unknown function from a finite number of linear measurements of that function. Interpreting this optimal recovery problem as a zero-sum game in which Player I selects the unknown function and Player II selects its approximation, and using relative errors in a quadratic norm to define losses, Gaussian priors emerge as optimal mixed strategies for such games, and the covariance operator of the optimal Gaussian prior is determined by the quadratic norm used to define the relative error of the recovery.
(IBC), the branch of computational complexity founded on the observation that numerical implementation requires computation with partial information and limited resources. In IBC, the performance of an algorithm operating on incomplete information can be analyzed in the worst-case or the average-case (randomized) setting with respect to the missing information. Moreover, as Packel observed, the average-case setting can be interpreted as a
, formulating the relationship between numbers computed by the computer (e.g. matrix-vector multiplications in linear algebra, gradients in optimization, values of the integrand or the vector field defining a differential equation) and the quantity in question (the solution of the linear problem, the minimum, the integral, the solution curve) in a
Randomisation-based methods are defined through random perturbations of standard deterministic numerical methods for ordinary differential equations. For example, this has been achieved by adding
Gaussian perturbations to the solutions of one-step integrators or by randomly perturbing their time-steps.
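A minimal sketch of the first variant, assuming a scalar ODE and additive Gaussian perturbations whose standard deviation shrinks faster than the step size (the h**1.5 scaling and all names are illustrative choices, not from a specific reference):

```python
import numpy as np

def randomised_euler(f, y0, t0, t1, h, scale=1.0, rng=None):
    """Explicit Euler with an additive Gaussian perturbation after each step.

    The perturbation standard deviation scales as h**1.5, so the injected
    noise vanishes faster than the O(h) local truncation error.
    """
    rng = np.random.default_rng() if rng is None else rng
    ts, ys = [t0], [float(y0)]
    t, y = t0, float(y0)
    while t < t1 - 1e-12:
        y = y + h * f(t, y) + scale * h**1.5 * rng.standard_normal()
        t += h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# An ensemble of perturbed solves yields an empirical distribution over
# trajectories; here for y' = -y, y(0) = 1 on [0, 1].
rng = np.random.default_rng(0)
ens = np.array([randomised_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.01,
                                 scale=0.5, rng=rng)[1][-1]
                for _ in range(200)])
```

The spread of the ensemble endpoints serves as a (sampling-based) indicator of the numerical uncertainty at the final time.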
The boundary between these two categories is not sharp; indeed, a
Gaussian process regression approach based on randomised data has also been developed. These methods have been applied to problems in computational Riemannian geometry, inverse problems, latent force models, and to differential equations
Since they use and allow for an explicit likelihood describing the relationship between computed numbers and target quantity, probabilistic numerical methods can use the results of even highly imprecise, biased and stochastic computations. Conversely, probabilistic numerical methods can also provide
is conditioned on partially-known physics, given by uncertain boundary conditions (BC) and a linear PDE, as well as on noisy physical measurements from experiment. The boundary conditions and the right-hand side of the PDE are not known but inferred from a small set of noise-corrupted measurements.
Gaussian process regression methods are based on posing the problem of solving the differential equation at hand as a
Gaussian process regression problem, interpreting evaluations of the right-hand side as data on the derivative. These techniques resemble Bayesian cubature, but employ different
and collaborators, interplays between numerical approximation and statistical inference can also be traced back to
Pálásti and Rényi, Sard, Kimeldorf and Wahba (on the correspondence between Bayesian estimation and spline smoothing/interpolation) and Larkin (on the correspondence between
prior conditioned on observations. This belief then guides the algorithm in obtaining observations that are likely to advance the optimization process. Bayesian optimization policies are usually realized by transforming the objective function posterior into an inexpensive, differentiable
cannot be computed in reasonable time. Hence, mini-batching is generally used to construct estimators of these quantities on a random subset of the data. Probabilistic numerical methods model this uncertainty explicitly and allow for automated decisions and parameter tuning.
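To illustrate, the following sketch (a hypothetical least-squares model; all names are illustrative) computes a mini-batch estimate of the full-data gradient together with an empirical variance that quantifies the uncertainty this subsampling induces:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 5
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

def batch_gradient(w, idx):
    """Per-example gradients of the squared loss on a mini-batch."""
    r = X[idx] @ w - y[idx]            # residuals, shape (B,)
    return 2.0 * r[:, None] * X[idx]   # shape (B, d)

w = np.zeros(d)
idx = rng.choice(N, size=256, replace=False)
g = batch_gradient(w, idx)
g_hat = g.mean(axis=0)                    # unbiased estimate of the full gradient
g_var = g.var(axis=0, ddof=1) / len(idx)  # empirical variance of that estimate
```

Quantities like `g_var` are what probabilistic numerical optimizers (e.g. probabilistic line searches) consume to make automated step-size or stopping decisions.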
. Methods typically assume a Gaussian distribution, due to its closure under linear observations of the problem. While conceptually different, these two views are computationally equivalent and inherently connected via the right-hand side through
, have been developed for initial and boundary value problems. The many probabilistic numerical methods designed for ordinary differential equations can broadly be grouped into the following two categories:
Multiple sources of information (e.g. algebraic or mechanistic knowledge about the form of a differential equation, and observations of the trajectory of the system collected in the physical world) can be combined naturally and
; furthermore, this minimal error is proportional to the sum of cubes of the inter-node spacings. As a result, one can see the trapezoidal rule with equally-spaced nodes as statistically optimal in some sense, an early example of the
regression and numerical approximation). Although the approach of modelling a perfectly known function as a sample from a random process may seem counterintuitive, a natural framework for understanding it can be found in
. As with ordinary differential equations, the approaches can broadly be divided into those based on randomisation, generally of some underlying finite-element mesh, and those based on Gaussian process regression.
in an adversarial game obtained by lifting a (worst-case) minimax problem to a minimax problem over mixed (randomized) strategies. This observation leads to a natural connection between numerical approximation and
These advantages are essentially the computational counterparts of the advantages that
Bayesian methods enjoy over point estimates in machine learning, transferred to the computational domain.
. A welcome side effect of this approach is that uncertainty in the objective function, as measured by the underlying probabilistic belief, can guide an optimization policy in addressing the classic
Hierarchical
Bayesian inference can be used to set and control internal hyperparameters in such methods in a generic fashion, rather than having to re-invent novel methods for each parameter
as this allows us to obtain a closed-form posterior distribution on the integral which is a univariate
Gaussian distribution. Bayesian quadrature is particularly useful when the function
Briol, F.-X.; Oates, C. J.; Girolami, M.; Osborne, M. A.; Sejdinovic, D. (2019). "Probabilistic integration: A role in statistical computation? (with discussion and rejoinder)".
Abdulle, A.; Garegnani, G. (2021). "A probabilistic finite element method based on random meshes: A posteriori error estimators and
Bayesian inverse problems".
error estimates (in particular, the ability to return joint posterior samples, i.e. multiple realistic hypotheses for the true unknown solution of the problem)
evaluations of the integrand (shown in black). Shaded areas in the left column illustrate the marginal standard deviations. The right figure shows the prior (
Abdulle, A.; Garegnani, G. (2020). "Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration".
, a general approach to optimization grounded in Bayesian inference. Bayesian optimization algorithms operate by maintaining a probabilistic belief about
A large class of methods is iterative in nature and collects information about the linear system to be solved via repeated matrix-vector multiplication
Operator-Adapted
Wavelets, Fast Solvers, and Numerical Homogenization: From a Game Theoretic Approach to Numerical Approximation and Algorithm Design
Probabilistic numerical PDE solvers based on
Gaussian process regression recover classical methods on linear PDEs for certain priors, in particular
. Such methods can be roughly split into a solution-based and a matrix-based perspective, depending on whether belief is expressed over the solution
A later seminal contribution to the interplay of numerical analysis and probability was provided by Albert Suldin in the context of univariate
Pförtner, M.; Steinwart, I.; Hennig, P.; Wenger, J. (2022). "Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers".
of a numerical method. Suldin's point of view was later extended by Mike Larkin. Note that Suldin's Brownian motion prior on the integrand
Bayesian optimization of a function (black) with Gaussian processes (purple). Three acquisition functions (blue) are shown at the bottom.
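The Bayesian optimization loop illustrated by such figures can be sketched as follows. This is a minimal illustration with a squared-exponential Gaussian process prior and an upper-confidence-bound acquisition function; the toy objective, length-scale, and trade-off parameter are arbitrary choices for the example, not the article's:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs, unit prior variance.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    # Standard Gaussian process regression equations.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mu, np.clip(var, 1e-12, None)

def objective(x):
    # Toy black-box function to be maximized.
    return np.sin(3 * x) - x ** 2

x_grid = np.linspace(-1.0, 2.0, 301)
x_obs = np.array([-0.5, 1.5])
y_obs = objective(x_obs)
for _ in range(10):
    mu, var = gp_posterior(x_obs, y_obs, x_grid)
    ucb = mu + 2.0 * np.sqrt(var)      # upper-confidence-bound acquisition
    x_next = x_grid[np.argmax(ucb)]    # next evaluation location
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
best = x_obs[np.argmax(y_obs)]
```

The acquisition function trades off exploitation (high posterior mean) against exploration (high posterior variance), which is exactly the exploration vs. exploitation tradeoff discussed in the text.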
propagation of the approximation error to a combined Gaussian process posterior, which quantifies the uncertainty arising from both the
Many of the most popular classic numerical algorithms can be re-interpreted in the probabilistic framework. This includes the method of
as the output. In most cases, numerical algorithms also take internal adaptive decisions about which numbers to compute, which form an
Because all probabilistic numerical methods use essentially the same data type – probability measures – to quantify uncertainty over
Probabilistic numerical methods promise several conceptual advantages over classic, point-estimate based approximation techniques:
. Precursors to what is now called "probabilistic numerics" can be found as early as the late 19th and early 20th century.
The origins of probabilistic numerics can be traced to a discussion of probabilistic approaches to polynomial interpolation by
, seeing the output of a quadrature method not just as a point estimate but as a probability distribution in its own right.
regression. This was later improved (in terms of efficient computation) in favor of Gauss–Markov priors modeled by the
, seeking to obtain a sequence of observations yielding the most optimization progress as evaluated by an appropriate
that is maximized to select each successive observation location. One prominent approach is to model optimization via
4877:"Multigrid with rough coefficients and multiresolution operator decomposition from hierarchical information games"
is a real-valued Gaussian random variable. In particular, after conditioning on the observed pointwise values of
4197:"Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective"
Hans Kersting; Nicholas Krämer; Martin Schiegg; Christian Daniel; Michael Tiemann; Philipp Hennig (2020).
4056:"Statistical analysis of differential equations: introducing probability measures on numerical solutions"
. Cambridge Monographs on Applied and Computational Mathematics. Cambridge: Cambridge University Press.
Mahsereci, M.; Balles, L.; Lassner, C.; Hennig, P. (2021). "Early Stopping without a Validation Set".
prior and likelihood. In such cases, the variance of the Gaussian posterior is then associated with a
This defines a probability measure on the solution of the differential equation that can be sampled.
3738:"Chapter 8: First-Order Filter for Gradients; chapter 9: Second-Order Filter for Hessian Elements"
and often non-linear observation models. In its infancy, this class of methods was based on naive
{\displaystyle \textstyle L(\theta )={\frac {1}{N}}\sum _{n=1}^{N}\ell (y_{n},f_{\theta }(x_{n}))}
4844:"The algorithm designer versus nature: a game-theoretic approach to information-based complexity"
Probabilistic solutions to differential equations and their application to Riemannian statistics
they can be chained together to propagate uncertainty across large-scale, composite computations
4768:"A correspondence between Bayesian estimation on stochastic processes and smoothing by splines"
. The statistical problem considered by Suldin was the approximation of the definite integral
Learning to solve a partial differential equation. A problem-specific Gaussian process prior
Scientific field at the intersection of statistics, machine learning and applied mathematics
A Probabilistic State Space Model for Joint Inference from Differential Equations and Data
Oates, C. J.; Sullivan, T. J. (2019). "A modern retrospective on probabilistic numerics".
Suldin, A. V. (1959). "Wiener measure and its applications to approximation methods. I".
. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP).
Probabilistic numerical linear algebra routines have been successfully applied to scale
Optimal estimation in approximation theory (Proc. Internat. Sympos., Freudenstadt, 1976
. The belief update uses the fact that the inferred object is linked to matrix multiplications
Probabilistic numerical methods for linear algebra have primarily focused on solving
BackPACK: Built on top of PyTorch. It efficiently computes quantities other than the gradient.
Chkrebtii, Oksana A.; Campbell, David A.; Calderhead, Ben; Girolami, Mark A. (2016).
) Gaussian distribution over the value of the integral, as well as the true solution.
Differentiable Likelihoods for Fast Inversion of 'Likelihood-Free' Dynamical Systems
. Suldin showed that, for given quadrature nodes, the quadrature rule with minimal
given (possibly noisy or indirect) evaluations of that function at a set of points.
of the algorithm, removing otherwise necessary nested loops in computation, e.g. in
4733:. Mathematical Surveys and Monographs. Vol. 9. American Mathematical Society.
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
is large and cannot be processed at once, meaning that local quantities (given some
Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)
Formally, this means casting the setup of the computational problem in terms of a
Hennig, Philipp; Kiefel, Martin (2013). "Quasi-Newton methods: A new direction".
numerical algorithm, this process of approximation is thought of as a problem of
the solution to a mathematical problem (examples below include the solution to a
4803:. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc.
Wenger, J.; Pleiss, G.; Hennig, P.; Cunningham, J. P.; Gardner, J. R. (2022).
ProbNumDiffEq.jl: Probabilistic numerical ODE solvers based on filtering, implemented in Julia.
, which consist of finding the minimum or maximum of some objective function
is expensive to evaluate and the dimension of the data is small to moderate.
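A minimal sketch of Bayesian quadrature: assuming a zero-mean Brownian-motion prior on the integrand (kernel k(s, t) = min(s, t), chosen here only because its kernel embeddings over [0, 1] have simple closed forms, not because it is the canonical choice), the posterior over the integral is the univariate Gaussian described above.

```python
import numpy as np

def bayesian_quadrature(nodes, f_vals):
    """Posterior over the integral of f on [0, 1] under a Brownian-motion prior.

    With k(s, t) = min(s, t) the embeddings are closed-form:
    int_0^1 min(x, t) dt = x - x**2/2  and  the double integral equals 1/3.
    """
    K = np.minimum.outer(nodes, nodes)   # Gram matrix at the nodes
    z = nodes - nodes ** 2 / 2.0         # kernel mean embedding at the nodes
    w = np.linalg.solve(K, z)            # Bayesian quadrature weights
    mean = w @ f_vals                    # posterior mean of the integral
    var = 1.0 / 3.0 - w @ z              # posterior variance of the integral
    return mean, var

nodes = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
mean, var = bayesian_quadrature(nodes, nodes)  # integrand f(t) = t
```

For this prior the posterior mean reproduces the trapezoidal rule (exactly 0.5 for the linear integrand above, which satisfies the prior's pinning f(0) = 0), and the posterior variance is the sum of (spacing)**3 / 12 over the five sub-intervals, in line with Suldin's average-case analysis discussed in the history section.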
4318:"A probabilistic model for the numerical solution of initial value problems"
Conrad, P.R.; Girolami, M.; Särkkä, S.; Stuart, A.M.; Zygalakis, K. (2017).
{\displaystyle \textstyle {\frac {1}{12}}\sum _{i=2}^{n}(t_{i}-t_{i-1})^{3}}
3765:"Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients"
4631:"Gaussian measure in Hilbert space and applications in numerical analysis"
4506:"Bayesian Solution Uncertainty Quantification for Differential Equations"
4369:"Bayesian solution uncertainty quantification for differential equations"
Preconditioning for Scalable Gaussian Process Hyperparameter Optimization
{\displaystyle \mathrm {d} x(t)=Ax(t)\,\mathrm {d} t+B\,\mathrm {d} v(t)}
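For intuition, samples from the simplest instance of such a prior, the once-integrated Wiener process (the choice A = [[0, 1], [0, 0]], B = (0, 1) transposed is one standard example, used here purely for illustration), can be drawn by Euler–Maruyama discretisation:

```python
import numpy as np

# Once-integrated Wiener process prior: state x(t) = (y(t), y'(t)) with
# drift A = [[0, 1], [0, 0]] and diffusion B = (0, 1)^T in dx = A x dt + B dv.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])

def sample_prior(n_paths=300, n_steps=500, T=1.0, rng=None):
    """Euler-Maruyama samples of the linear SDE prior, all paths at once."""
    rng = np.random.default_rng(0) if rng is None else rng
    h = T / n_steps
    x = np.zeros((n_paths, n_steps + 1, 2))
    for k in range(n_steps):
        dv = np.sqrt(h) * rng.standard_normal(n_paths)
        x[:, k + 1] = x[:, k] + h * x[:, k] @ A.T + B * dv[:, None]
    return x

paths = sample_prior()[:, :, 0]   # first state component y(t)
```

Because the SDE is linear with Gaussian noise, such priors are Gauss–Markov, which is what allows the ODE filters mentioned in the text to perform inference by Kalman filtering rather than by sampling.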
Micchelli, C. A.; Rivlin, T. J. (1977). "A survey of optimal recovery".
Chkrebtii, O.; Campbell, D. A.; Calderhead, B.; Girolami, M. A. (2016).
3719:"Dynamic Pruning of a Neural Network via Gradient Signal-to-Noise Ratio"
A number of probabilistic numerical methods have also been proposed for
and that the operations of integration and of pointwise evaluation of
. Bayesian quadrature consists of specifying a prior distribution over
Probabilistic numerical methods have been developed for the problem of
3895:"Sparse Cholesky Factorization by Kullback–Leibler Minimization"
Probabilistic numerical methods have been developed in the context of
throughout the optimization procedure; this often takes the form of a
Hennig, P. (2015). "Probabilistic interpretation of linear solvers".
is touched upon by a number of other areas of mathematics, including
{\displaystyle \textstyle {\mathcal {D}}=\{(x_{n},y_{n})\}_{n=1}^{N}}
Proceedings of the 35th International Conference on Machine Learning
Schmidt, Jonathan; Krämer, Peter Nicholas; Hennig, Philipp (2021).
. In all these cases, the classic method is based on a regularized
Traub, J. F.; Wasilkowski, G. W.; Woźniakowski, H. (1988).
Cockayne, J.; Oates, C. J.; Sullivan, T. J.; Girolami, M. (2019).
Samples from the first component of the numerical solution of the
Emukit: Adaptable Python toolbox for decision-making under uncertainty.
Wenger, J.; Pförtner, M.; Hennig, P.; Cunningham, J. P. (2022).
3450:. Advances in Neural Information Processing Systems (NeurIPS).
3420:. Advances in Neural Information Processing Systems (NeurIPS).
with mean equal to the trapezoidal rule and variance equal to
that can be associated with the posterior mean arising from a
Posterior and Computational Uncertainty in Gaussian Processes
Schäfer, Florian; Katzfuss, Matthias; Owhadi, Houman (2021).
with random coefficients, and asked for "probable values" of
Siems J. N.; Klein A.; Archambeau C.; Mahsereci, M. (2021).
3307:(3). International Society for Bayesian Analysis: 937–1012.
4707:(1956). "On interpolation theory and the theory of games".
Advances in Neural Information Processing Systems (NeurIPS)
Advances in Neural Information Processing Systems (NeurIPS)
of the linear system or the (pseudo-)inverse of the matrix
Illustration of a matrix-based probabilistic linear solver.
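While this figure illustrates the matrix-based perspective, the closely related solution-based variant is easier to sketch. The following minimal illustration maintains a Gaussian belief over the solution x and conditions it on noiseless projections s transposed times (A x) = s transposed times b; the prior, probe directions, and test problem are arbitrary choices, not from a specific reference:

```python
import numpy as np

def probabilistic_linear_solve(A, b, directions, prior_cov=None):
    """Condition a Gaussian belief over x on projections s^T A x = s^T b.

    Solution-based view: each probe direction s yields one exact linear
    observation v^T x = s^T b of the unknown solution x, with v = A^T s.
    """
    n = len(b)
    mu = np.zeros(n)
    Sigma = np.eye(n) if prior_cov is None else prior_cov.copy()
    for s in directions:
        v = A.T @ s                    # observation functional
        obs = s @ b                    # observed value of v^T x
        Sv = Sigma @ v
        gain = Sv / (v @ Sv)           # noiseless Gaussian conditioning
        mu = mu + gain * (obs - v @ mu)
        Sigma = Sigma - np.outer(gain, Sv)
    return mu, Sigma

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
A = G @ G.T + 4 * np.eye(4)            # symmetric positive definite system
x_true = rng.standard_normal(4)
b = A @ x_true
mu, Sigma = probabilistic_linear_solve(A, b, directions=np.eye(4))
```

After n linearly independent probes the posterior collapses onto the exact solution; with fewer probes, the remaining covariance quantifies which components of x are still undetermined.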
, pruning, and first- and second-order search directions.
Bayesian quadrature with a Gaussian process conditioned on
4248:"Bayesian ODE solvers: The maximum a posteriori estimate"
Tronarp, F.; Kersting, H.; Särkkä, S.; Hennig, P (2019).
3643:"Probabilistic Line Searches for Stochastic Optimization"
Owhadi, Houman; Scovel, Clint; Schäfer, Florian (2019).
8th ICML Workshop on Automated Machine Learning (AutoML)
3129:"Probabilistic numerics and uncertainty in computations"
In this setting, the optimization objective is often an
Probabilistic line searches for stochastic optimization
, then computing the implied posterior distribution on
are seen as problems of statistical, probabilistic, or
Active Uncertainty Calibration in Bayesian ODE Solvers
3435:. International Conference on Machine Learning (ICML).
. Inference can thus be implemented efficiently with
. Epistemic uncertainty arises when the dataset size
Perhaps the most notable effort in this direction is
Bayesian solution of ordinary differential equations
3960:"Probabilistic Kernel-Matrix Determinant Estimation"
3859:"Probabilistic iterative methods for linear systems"
Diaconis, P. (1988). "Bayesian Numerical Analysis".
Hennig, P.; Osborne, M. A.; Kersting, H. P. (2022).
{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }
International Conference on Machine Learning (ICML)
Probabilistic Approaches to Stochastic Optimization
3666:"Coupling Adaptive Batch Sizes with Learning Rates"
{\displaystyle \textstyle \int u(t)\,\mathrm {d} t}
{\displaystyle \textstyle \int u(t)\,\mathrm {d} t}
obtained with a probabilistic numerical integrator.
with a geometric structure such as symplecticity.
Probabilistic numerics have also been studied for
Izv. Vysš. Učebn. Zaved. Matematika
Statistical Decision Theory and Related Topics IV
Classical quadrature rules via Gaussian processes
Probabilistic Linear Solvers for Machine Learning
Hennig, P.; Osborne, M. A.; Girolami, M. (2015).
O'Hagan, A. (1991). "Bayes–Hermite quadrature".
, in particular to address main issues such as
"Probabilistic linear solvers: a unifying view"
. In modern terminology, Poincaré considered a
{\displaystyle b^{\intercal }z=x^{\intercal }v}
In numerical integration, function evaluations
a likelihood in computations often considered "likelihood-free" elsewhere.
to large datasets. In particular, they enable
Journal of Statistical Planning and Inference
Notices of the American Mathematical Society
Tronarp, F.; Särkkä, S.; Hennig, P. (2021).
that quantifies how well a predictive model
{\displaystyle \textstyle \int f(x)\nu (dx)}
{\displaystyle \textstyle \int f(x)\nu (dx)}
Schober, M.; Särkkä, S.; Hennig, P. (2019).
Balles, L.; Romero, J.; Hennig, P. (2017).
Journal of Machine Learning Research (JMLR)
centering on the concept of uncertainty in
"Bayesian probabilistic numerical methods"
. This viewpoint is very close to that of
, given access to pointwise evaluation of
{\displaystyle f(x_{1}),\ldots ,f(x_{n})}
{\displaystyle f(x_{1}),\ldots ,f(x_{n})}
. Cambridge: Cambridge University Press.
Maren Mahsereci; Philipp Hennig (2015).
{\displaystyle u\colon [a,b]\to \mathbb {R} }
to obtain a posterior distribution over
A numerical method is an algorithm that
such as finding numerical solutions for
-dimensional vector modeling the first
{\displaystyle {\dot {y}}(t)=f(t,y(t))}
{\displaystyle \ell (y,f_{\theta }(x))}
Bayesian sequential experimental design
. The most common choice of prior is a
{\displaystyle n=0,3,\ {\text{and}}\ 8}
Artificial Intelligence and Statistics
Uncertainty in Artificial Intelligence
Rasmussen, C.; Ghahramani, Z. (2002).
"A Bayesian conjugate gradient method"
Owhadi, Houman; Scovel, Clint (2019).
{\displaystyle t_{1},\dots ,t_{n}\in [a,b]}
, with the most popular method called
. In probabilistic numerics, tasks in
field of study at the intersection of
"Statistical Numerical Approximation"
Neural Information Processing Systems
Karvonen, Toni; Särkkä, Simo (2017).
of the latent boundary value problem.
exploration vs. exploitation tradeoff
simulation and differential equations
(second ed.). Gauthier-Villars.
Maximum Entropy and Bayesian Methods
SIAM Journal on Scientific Computing
Journal of Machine Learning Research
Journal of Machine Learning Research
. Vol. 33. pp. 6731–6742.
Multiscale Modeling & Simulation
Probabilistic numerical methods for
"Bayesian Numerical Homogenization"
ProbNum: Probabilistic Numerics in Python.
{\displaystyle x_{1},\ldots ,x_{n}}
Mahsereci, M.; Hennig, P. (2017).
methods of mean weighted residuals
performs on predicting the target
are used to estimate the integral
of a multivariate function). In a
Comput. Methods Appl. Mech. Engrg
Kersting, H.; Hennig, P. (2016).
ParBayesianOptimization R package
{\displaystyle n\in \mathbb {N} }
{\displaystyle z=A^{\intercal }v}
{\displaystyle \nabla L(\theta )}
and realised in the framework of
Hennig, P.; Hauberg, S. (2014).
(Thesis). Universität Tübingen.
stochastic differential equation
Balles, L.; Hennig, P. (2018).
Wenger, J.; Hennig, P. (2020).
The plots juxtapose the belief
ordinary differential equations
Ordinary differential equations
and conditioning this prior on
; Oates, C.; Reid, T. (2021).
. Cambridge University Press.
. Thus, the definite integral
{\displaystyle f(a_{i})=B_{i}}
partial differential equations
Partial differential equations
{\displaystyle H=A^{\dagger }}
{\displaystyle f_{\theta }(x)}
Wilson, Samuel (2019-11-22),
{\displaystyle u\mid \cdots }
) such as the loss function
from its corresponding input
10.1016/0885-064X(87)90014-8
Information-Based Complexity
Probabilistic Linear Algebra
10.1016/0378-3758(91)90002-V
10.1007/978-1-4613-8768-8_20
SIAM Journal on Optimization
Information-based complexity
information-based complexity
{\displaystyle i=1,\dots ,n}
information-based complexity
finite amount of computation
10.1007/978-1-4684-2388-4_1
Bartels, S.; Cockayne, J.;
Gaussian prior distribution
{\displaystyle v\mapsto Av}
systems of linear equations
Packel, Edward W. (1987).
10.1007/s11222-021-09993-7
10.1007/s11222-019-09900-1
10.1007/s11222-020-09926-w
10.15496/publikation-56119
10.1007/s11222-019-09897-7
(6). Springer: 1249–1263.
10.15496/publikation-26116
10.1007/s11222-019-09902-z
Uncertainty quantification
, evidently influenced by
History and related fields
{\displaystyle u^{\star }}
{\displaystyle L(\theta )}
linear system of equations
10.1016/j.cma.2021.113961
10.1007/s11222-017-9798-7
10.1007/s11222-016-9671-0
Mahsereci, Maren (2018).
Cockayne, J.; Oates, C.;
{\displaystyle x=A^{-1}b}
{\displaystyle v=A^{-1}y}
, batch-size selection,
mathematical optimization
10.1216/RMJ-1972-2-3-379
Statistics and Computing
Statistics and Computing
Statistics and Computing
Statistics and Computing
, expressed as a formal
(often, but not always,
10.1214/aoms/1177697089
MTA Mat. Kat. Int. Kozl
Calcul des Probabilités
Bartels, Simon (2020).
Garnett, Roman (2021).
; Girolami, M. (2019).
Owhadi, Houman (2015).
Calcul des Probabilités
with the true solution
with different vectors
with the system matrix
and the computation of
itself or its gradient
{\displaystyle \theta }
{\displaystyle \theta }
stochastic optimization
both inputs and outputs
for the squared error.
probabilistic inference
Kimeldorf, George S.;
Rocky Mountain J. Math
Larkin, F. M. (1972).
"Bayesian Monte Carlo"
10.1098/rspa.2015.0142
(2179): 20150142, 17.
Probabilistic Numerics
of numerical methods,
The interplay between
finite element methods
least-squares estimate
posterior distribution
Probabilistic numerics
Skilling, J. (1992).
; Hennig, P. (2019).
Bayesian Optimization
Average-case analysis
2839:
2770:
2743:
2723:
2676:
2652:
2635:average-case analysis
2620:
2556:
2536:
2512:
2466:
2416:
2378:
2329:
2306:given this prior and
2301:
2268:
2208:average-case analysis
2165:
2138:
2111:
2095:
2059:
2039:
2010:
1981:
1961:
1941:
1912:
1811:
1736:
1719:finite number of data
1701:
1662:
1623:
1577:
1541:
1512:
1479:
1459:
1439:
1419:
1390:
1379:
1346:
1304:
1272:
1243:
1223:
1203:
1183:
1163:
1143:
1107:
1056:
979:defined by a dataset
974:
899:
805:
788:Bayesian optimization
778:
754:
747:Bayesian optimization
732:
708:
659:
639:
575:
555:
540:against some measure
535:
515:
466:
420:
350:numerical integration
339:
337:{\displaystyle n=3,8}
307:
281:
236:
203:inside the inner loop
88:differential equation
4731:Linear Approximation
2756:
2732:
2689:
2665:
2641:
2565:
2545:
2525:
2475:
2432:
2387:
2338:
2310:
2299:{\displaystyle f(x)}
2281:
2241:
2147:
2121:
2100:
2057:{\displaystyle \nu }
2048:
2037:{\displaystyle v(t)}
2019:
2008:{\displaystyle y(t)}
1990:
1979:{\displaystyle \nu }
1970:
1959:{\displaystyle \nu }
1950:
1939:{\displaystyle x(t)}
1921:
1836:
1752:
1671:
1632:
1586:
1550:
1539:{\displaystyle y=Av}
1521:
1488:
1468:
1448:
1428:
1399:
1358:
1344:{\displaystyle Ax=b}
1326:
1281:
1252:
1232:
1212:
1192:
1172:
1152:
1116:
1065:
983:
870:
817:acquisition function
794:
767:
721:
668:
648:
584:
564:
553:{\displaystyle \nu }
544:
524:
475:
429:
365:
316:
290:
241:
154:quasi-Newton methods
86:, the solution of a
4875:Owhadi, H. (2017).
4461:2021CMAME.384k3961A
4426:. pp. 347–355.
4305:. pp. 309–318.
3921:2021SJSC...43A2019S
3559:Statistical Science
3155:2015RSPSA.47150142H
2846:Bayesian quadrature
2750:normal distribution
1377:{\displaystyle |A|}
1049:
355:Bayesian quadrature
305:{\displaystyle n=0}
229:Bayesian quadrature
166:worst-case estimate
150:Gaussian quadrature
142:conjugate gradients
127:likelihood function
28:applied mathematics
4904:10.1137/15M1013894
4772:Ann. Math. Statist
3929:10.1137/20M1336254
3905:(3): A2019–A2046.
3215:10.1137/17M1139357
2834:
2833:
2738:
2718:
2717:
2671:
2647:
2627:mean squared error
2615:
2551:
2531:
2507:
2461:
2460:
2411:
2373:
2324:
2296:
2263:
2218:, and statistical
2200:numerical analysis
2172:
2160:
2133:
2106:
2054:
2034:
2005:
1976:
1956:
1936:
1907:
1806:
1743:
1711:Gaussian processes
1696:
1657:
1618:
1572:
1536:
1507:
1474:
1454:
1434:
1414:
1393:
1374:
1341:
1299:
1267:
1238:
1218:
1198:
1178:
1158:
1138:
1102:
1051:
1050:
1029:
969:
968:
835:Local optimization
800:
773:
757:
727:
703:
702:
654:
634:
570:
550:
530:
510:
509:
461:
415:
346:
334:
312:) and posterior (
302:
276:
129:, and returning a
123:prior distribution
116:Bayesian inference
82:, the value of an
64:Bayesian inference
44:numerical analysis
4945:978-1-4684-2390-7
4930:. pp. 1–54.
4674:(10): 1608–1617.
4533:10.1214/16-BA1017
4510:Bayesian Analysis
4396:10.1214/16-BA1017
4373:Bayesian Analysis
4184:. pp. 23–37.
3581:10.1214/18-STS660
3480:978-1-4613-8770-1
3314:10.1214/19-BA1145
3301:Bayesian Analysis
3265:10.1137/140955501
3097:10.1137/140974596
3051:978-1-108-48436-7
2768:
2741:{\displaystyle u}
2674:{\displaystyle u}
2650:{\displaystyle u}
2554:{\displaystyle u}
2534:{\displaystyle u}
2109:{\displaystyle u}
1764:
1721:observed and the
1477:{\displaystyle x}
1457:{\displaystyle v}
1437:{\displaystyle A}
1221:{\displaystyle N}
1201:{\displaystyle x}
1181:{\displaystyle y}
1148:parameterized by
897:
803:{\displaystyle f}
776:{\displaystyle f}
730:{\displaystyle f}
657:{\displaystyle f}
573:{\displaystyle f}
533:{\displaystyle f}
272:
268:
264:
146:Nordsieck methods
4983:
4950:
4949:
4923:
4917:
4916:
4906:
4896:
4872:
4866:
4865:
4863:
4839:
4833:
4832:
4822:
4814:
4796:
4790:
4789:
4787:
4759:
4753:
4752:
4739:10.1090/surv/009
4723:
4717:
4716:
4700:
4694:
4693:
4683:
4681:10.1090/noti1963
4659:
4653:
4652:
4650:
4626:
4617:
4616:
4604:
4598:
4597:
4586:
4580:
4579:
4577:
4565:
4554:
4553:
4535:
4525:
4516:(4): 1239–1267.
4501:
4495:
4494:
4488:
4480:
4454:
4434:
4428:
4427:
4415:
4409:
4408:
4398:
4388:
4379:(4): 1239–1267.
4364:
4358:
4357:
4347:
4337:
4313:
4307:
4306:
4294:
4288:
4287:
4277:
4267:
4243:
4237:
4236:
4226:
4216:
4207:(6): 1297–1315.
4192:
4186:
4185:
4173:
4167:
4166:
4160:
4152:
4134:
4114:
4108:
4107:
4101:
4093:
4083:
4066:(4): 1065–1082.
4051:
4040:
4039:
4037:
4017:
4008:
4007:
4005:
3985:
3976:
3975:
3955:
3949:
3948:
3914:
3890:
3884:
3883:
3881:
3863:
3850:
3841:
3840:
3830:
3820:
3792:
3783:
3782:
3780:
3760:
3754:
3753:
3733:
3727:
3726:
3714:
3708:
3707:
3705:
3693:
3687:
3686:
3684:
3670:
3661:
3655:
3654:
3638:
3632:
3631:
3621:
3615:
3614:
3613:
3612:
3599:
3593:
3592:
3574:
3554:
3548:
3547:
3541:
3532:
3526:
3525:
3505:
3499:
3498:
3492:
3484:
3458:
3452:
3451:
3443:
3437:
3436:
3428:
3422:
3421:
3413:
3407:
3406:
3404:
3384:
3378:
3377:
3369:
3363:
3362:
3360:
3340:
3327:
3326:
3316:
3288:
3277:
3276:
3258:
3238:
3227:
3226:
3200:
3191:
3185:
3184:
3174:
3148:
3124:
3118:
3117:
3099:
3089:
3065:
3056:
3055:
3035:
3020:
3019:
3001:
2992:(6): 1335–1351.
2981:
2975:
2974:
2962:
2951:
2900:ProbNumDiffEq.jl
2858:Gaussian process
2843:
2841:
2840:
2835:
2832:
2831:
2822:
2821:
2803:
2802:
2789:
2784:
2769:
2761:
2747:
2745:
2744:
2739:
2727:
2725:
2724:
2719:
2713:
2680:
2678:
2677:
2672:
2659:Gaussian measure
2656:
2654:
2653:
2648:
2631:trapezoidal rule
2624:
2622:
2621:
2616:
2596:
2595:
2577:
2576:
2560:
2558:
2557:
2552:
2540:
2538:
2537:
2532:
2516:
2514:
2513:
2508:
2506:
2470:
2468:
2467:
2462:
2456:
2420:
2418:
2417:
2412:
2382:
2380:
2379:
2374:
2372:
2371:
2356:
2355:
2333:
2331:
2330:
2325:
2323:
2305:
2303:
2302:
2297:
2272:
2270:
2269:
2264:
2262:
2254:
2188:spectral methods
2180:Galerkin methods
2178:, which include
2169:
2167:
2166:
2161:
2159:
2158:
2142:
2140:
2139:
2134:
2115:
2113:
2112:
2107:
2070:Kalman filtering
2063:
2061:
2060:
2055:
2043:
2041:
2040:
2035:
2014:
2012:
2011:
2006:
1985:
1983:
1982:
1977:
1965:
1963:
1962:
1957:
1945:
1943:
1942:
1937:
1916:
1914:
1913:
1908:
1894:
1879:
1843:
1827:Gaussian process
1815:
1813:
1812:
1807:
1766:
1765:
1757:
1705:
1703:
1702:
1697:
1692:
1691:
1666:
1664:
1663:
1658:
1653:
1652:
1627:
1625:
1624:
1619:
1614:
1613:
1598:
1597:
1581:
1579:
1578:
1573:
1568:
1567:
1545:
1543:
1542:
1537:
1516:
1514:
1513:
1508:
1506:
1505:
1483:
1481:
1480:
1475:
1463:
1461:
1460:
1455:
1443:
1441:
1440:
1435:
1423:
1421:
1420:
1415:
1383:
1381:
1380:
1375:
1373:
1365:
1350:
1348:
1347:
1342:
1308:
1306:
1305:
1300:
1276:
1274:
1273:
1268:
1247:
1245:
1244:
1239:
1227:
1225:
1224:
1219:
1207:
1205:
1204:
1199:
1187:
1185:
1184:
1179:
1167:
1165:
1164:
1159:
1147:
1145:
1144:
1139:
1128:
1127:
1111:
1109:
1108:
1103:
1089:
1088:
1060:
1058:
1057:
1052:
1048:
1043:
1025:
1024:
1012:
1011:
993:
992:
978:
976:
975:
970:
961:
960:
948:
947:
935:
934:
918:
913:
898:
890:
825:utility function
812:Gaussian process
809:
807:
806:
801:
782:
780:
779:
774:
736:
734:
733:
728:
715:Gaussian process
712:
710:
709:
704:
663:
661:
660:
655:
643:
641:
640:
635:
630:
629:
602:
601:
579:
577:
576:
571:
559:
557:
556:
551:
539:
537:
536:
531:
519:
517:
516:
511:
470:
468:
467:
462:
460:
459:
441:
440:
424:
422:
421:
416:
411:
410:
383:
382:
343:
341:
340:
335:
311:
309:
308:
303:
285:
283:
282:
277:
270:
269:
266:
262:
207:inverse problems
36:machine learning
4991:
4990:
4986:
4985:
4984:
4982:
4981:
4980:
4956:
4955:
4954:
4953:
4946:
4925:
4924:
4920:
4874:
4873:
4869:
4841:
4840:
4836:
4815:
4811:
4798:
4797:
4793:
4761:
4760:
4756:
4749:
4725:
4724:
4720:
4702:
4701:
4697:
4661:
4660:
4656:
4628:
4627:
4620:
4606:
4605:
4601:
4590:Poincaré, Henri
4588:
4587:
4583:
4567:
4566:
4557:
4503:
4502:
4498:
4481:
4436:
4435:
4431:
4417:
4416:
4412:
4366:
4365:
4361:
4315:
4314:
4310:
4296:
4295:
4291:
4245:
4244:
4240:
4194:
4193:
4189:
4175:
4174:
4170:
4153:
4116:
4115:
4111:
4094:
4053:
4052:
4043:
4019:
4018:
4011:
3987:
3986:
3979:
3957:
3956:
3952:
3892:
3891:
3887:
3861:
3852:
3851:
3844:
3794:
3793:
3786:
3762:
3761:
3757:
3735:
3734:
3730:
3716:
3715:
3711:
3695:
3694:
3690:
3668:
3663:
3662:
3658:
3640:
3639:
3635:
3623:
3622:
3618:
3610:
3608:
3601:
3600:
3596:
3556:
3555:
3551:
3539:
3534:
3533:
3529:
3507:
3506:
3502:
3485:
3481:
3460:
3459:
3455:
3445:
3444:
3440:
3430:
3429:
3425:
3415:
3414:
3410:
3386:
3385:
3381:
3371:
3370:
3366:
3342:
3341:
3330:
3290:
3289:
3280:
3249:(1): 2347–260.
3240:
3239:
3230:
3198:
3193:
3192:
3188:
3126:
3125:
3121:
3067:
3066:
3059:
3052:
3037:
3036:
3023:
2983:
2982:
2978:
2971:
2960:
2953:
2952:
2948:
2943:
2921:
2890:
2882:theory of games
2875:decision theory
2823:
2807:
2794:
2754:
2753:
2748:, it follows a
2730:
2729:
2687:
2686:
2663:
2662:
2639:
2638:
2587:
2568:
2563:
2562:
2543:
2542:
2523:
2522:
2519:Brownian motion
2473:
2472:
2430:
2429:
2385:
2384:
2363:
2347:
2336:
2335:
2308:
2307:
2279:
2278:
2239:
2238:
2220:decision theory
2196:
2150:
2145:
2144:
2119:
2118:
2098:
2097:
2083:
2066:Brownian motion
2046:
2045:
2017:
2016:
1988:
1987:
1986:derivatives of
1968:
1967:
1948:
1947:
1919:
1918:
1834:
1833:
1750:
1749:
1731:
1680:
1669:
1668:
1641:
1630:
1629:
1605:
1589:
1584:
1583:
1559:
1548:
1547:
1519:
1518:
1497:
1486:
1485:
1466:
1465:
1446:
1445:
1426:
1425:
1397:
1396:
1356:
1355:
1324:
1323:
1316:
1279:
1278:
1250:
1249:
1230:
1229:
1210:
1209:
1190:
1189:
1170:
1169:
1150:
1149:
1119:
1114:
1113:
1080:
1063:
1062:
1016:
1003:
981:
980:
952:
939:
926:
868:
867:
837:
792:
791:
765:
764:
749:
743:
719:
718:
666:
665:
646:
645:
621:
593:
582:
581:
562:
561:
542:
541:
522:
521:
473:
472:
451:
432:
427:
426:
402:
374:
363:
362:
314:
313:
288:
287:
239:
238:
231:
225:
220:
218:Numerical tasks
188:likelihood-free
135:active learning
72:
17:
12:
11:
5:
4989:
4987:
4979:
4978:
4973:
4968:
4958:
4957:
4952:
4951:
4944:
4918:
4867:
4854:(3): 244–257.
4834:
4809:
4791:
4778:(2): 495–502.
4754:
4747:
4718:
4695:
4654:
4641:(3): 379–421.
4618:
4615:(13): 145–158.
4599:
4581:
4555:
4496:
4429:
4410:
4359:
4308:
4289:
4238:
4187:
4168:
4125:(4): 907–932.
4109:
4041:
4009:
3977:
3950:
3885:
3853:Cockayne, J.;
3842:
3784:
3755:
3728:
3709:
3688:
3656:
3633:
3616:
3594:
3549:
3527:
3516:(3): 245–260.
3500:
3479:
3453:
3438:
3423:
3408:
3395:(1): 843–865.
3379:
3364:
3328:
3278:
3228:
3209:(4): 756–789.
3186:
3119:
3080:(3): 812–828.
3057:
3050:
3021:
2976:
2970:978-1-107-16344-7
2969:
2945:
2944:
2942:
2939:
2938:
2937:
2932:
2927:
2920:
2917:
2916:
2915:
2909:
2903:
2897:
2889:
2886:
2867:mixed strategy
2830:
2826:
2820:
2817:
2814:
2810:
2806:
2801:
2797:
2793:
2788:
2783:
2780:
2777:
2773:
2767:
2764:
2737:
2716:
2712:
2707:
2704:
2701:
2698:
2695:
2670:
2646:
2614:
2611:
2608:
2605:
2602:
2599:
2594:
2590:
2586:
2583:
2580:
2575:
2571:
2550:
2530:
2505:
2501:
2498:
2495:
2492:
2489:
2486:
2483:
2480:
2471:of a function
2459:
2455:
2450:
2447:
2444:
2441:
2438:
2410:
2407:
2404:
2401:
2398:
2395:
2392:
2370:
2366:
2362:
2359:
2354:
2350:
2346:
2343:
2322:
2318:
2315:
2295:
2292:
2289:
2286:
2261:
2257:
2253:
2249:
2246:
2237:on a function
2227:Henri Poincaré
2195:
2192:
2157:
2153:
2132:
2129:
2126:
2105:
2082:
2079:
2074:
2073:
2072:based methods.
2053:
2033:
2030:
2027:
2024:
2004:
2001:
1998:
1995:
1975:
1955:
1935:
1932:
1929:
1926:
1906:
1903:
1900:
1897:
1893:
1888:
1885:
1882:
1878:
1873:
1870:
1867:
1864:
1861:
1858:
1855:
1852:
1849:
1846:
1842:
1822:
1805:
1802:
1799:
1796:
1793:
1790:
1787:
1784:
1781:
1778:
1775:
1772:
1769:
1763:
1760:
1730:
1727:
1695:
1690:
1687:
1683:
1679:
1676:
1656:
1651:
1648:
1644:
1640:
1637:
1617:
1612:
1608:
1604:
1601:
1596:
1592:
1571:
1566:
1562:
1558:
1555:
1535:
1532:
1529:
1526:
1504:
1500:
1496:
1493:
1473:
1453:
1433:
1413:
1410:
1407:
1404:
1372:
1368:
1364:
1340:
1337:
1334:
1331:
1315:
1314:Linear algebra
1312:
1298:
1295:
1292:
1289:
1286:
1266:
1263:
1260:
1257:
1237:
1217:
1197:
1177:
1157:
1137:
1134:
1131:
1126:
1122:
1101:
1098:
1095:
1092:
1087:
1083:
1079:
1076:
1073:
1070:
1047:
1042:
1039:
1036:
1032:
1028:
1023:
1019:
1015:
1010:
1006:
1002:
999:
996:
991:
967:
964:
959:
955:
951:
946:
942:
938:
933:
929:
925:
922:
917:
912:
909:
906:
902:
896:
893:
888:
885:
882:
879:
876:
864:empirical risk
857:early stopping
836:
833:
799:
772:
742:
739:
726:
701:
698:
695:
692:
689:
686:
683:
680:
677:
674:
653:
633:
628:
624:
620:
617:
614:
611:
608:
605:
600:
596:
592:
589:
569:
549:
529:
520:of a function
508:
505:
502:
499:
496:
493:
490:
487:
484:
481:
458:
454:
450:
447:
444:
439:
435:
414:
409:
405:
401:
398:
395:
392:
389:
386:
381:
377:
373:
370:
333:
330:
327:
324:
321:
301:
298:
295:
275:
261:
258:
255:
252:
249:
246:
227:Main article:
224:
221:
219:
216:
211:
210:
198:
191:
183:
180:
71:
68:
52:linear algebra
15:
13:
10:
9:
6:
4:
3:
2:
4988:
4977:
4974:
4972:
4969:
4967:
4964:
4963:
4961:
4947:
4941:
4937:
4933:
4929:
4922:
4919:
4914:
4910:
4905:
4900:
4895:
4890:
4887:(1): 99–149.
4886:
4882:
4878:
4871:
4868:
4862:
4857:
4853:
4849:
4848:J. Complexity
4845:
4838:
4835:
4830:
4826:
4820:
4812:
4810:0-12-697545-0
4806:
4802:
4795:
4792:
4786:
4781:
4777:
4773:
4769:
4765:
4758:
4755:
4750:
4748:9780821815090
4744:
4740:
4736:
4732:
4728:
4722:
4719:
4714:
4710:
4706:
4703:Palasti, I.;
4699:
4696:
4691:
4687:
4682:
4677:
4673:
4669:
4665:
4658:
4655:
4649:
4644:
4640:
4636:
4632:
4625:
4623:
4619:
4614:
4610:
4603:
4600:
4595:
4591:
4585:
4582:
4576:
4571:
4564:
4562:
4560:
4556:
4551:
4547:
4543:
4539:
4534:
4529:
4524:
4519:
4515:
4511:
4507:
4500:
4497:
4492:
4486:
4478:
4474:
4470:
4466:
4462:
4458:
4453:
4448:
4444:
4440:
4433:
4430:
4425:
4421:
4414:
4411:
4406:
4402:
4397:
4392:
4387:
4382:
4378:
4374:
4370:
4363:
4360:
4355:
4351:
4346:
4341:
4336:
4331:
4328:(1): 99–122.
4327:
4323:
4319:
4312:
4309:
4304:
4300:
4293:
4290:
4285:
4281:
4276:
4271:
4266:
4261:
4257:
4253:
4249:
4242:
4239:
4234:
4230:
4225:
4220:
4215:
4210:
4206:
4202:
4198:
4191:
4188:
4183:
4179:
4172:
4169:
4164:
4158:
4150:
4146:
4142:
4138:
4133:
4128:
4124:
4120:
4113:
4110:
4105:
4099:
4091:
4087:
4082:
4077:
4073:
4069:
4065:
4061:
4057:
4050:
4048:
4046:
4042:
4036:
4031:
4027:
4023:
4016:
4014:
4010:
4004:
3999:
3995:
3991:
3984:
3982:
3978:
3973:
3969:
3965:
3961:
3954:
3951:
3946:
3942:
3938:
3934:
3930:
3926:
3922:
3918:
3913:
3908:
3904:
3900:
3896:
3889:
3886:
3880:
3875:
3872:(232): 1–34.
3871:
3867:
3860:
3856:
3849:
3847:
3843:
3838:
3834:
3829:
3824:
3819:
3814:
3810:
3806:
3802:
3798:
3791:
3789:
3785:
3779:
3774:
3770:
3766:
3759:
3756:
3751:
3747:
3743:
3739:
3732:
3729:
3724:
3720:
3713:
3710:
3704:
3699:
3692:
3689:
3683:
3678:
3674:
3667:
3660:
3657:
3652:
3648:
3644:
3637:
3634:
3629:
3628:
3620:
3617:
3607:
3606:
3598:
3595:
3590:
3586:
3582:
3578:
3573:
3568:
3564:
3560:
3553:
3550:
3545:
3538:
3531:
3528:
3523:
3519:
3515:
3511:
3504:
3501:
3496:
3490:
3482:
3476:
3472:
3468:
3464:
3457:
3454:
3449:
3442:
3439:
3434:
3427:
3424:
3419:
3412:
3409:
3403:
3398:
3394:
3390:
3383:
3380:
3375:
3368:
3365:
3359:
3354:
3350:
3346:
3339:
3337:
3335:
3333:
3329:
3324:
3320:
3315:
3310:
3306:
3302:
3298:
3294:
3287:
3285:
3283:
3279:
3274:
3270:
3266:
3262:
3257:
3252:
3248:
3244:
3237:
3235:
3233:
3229:
3224:
3220:
3216:
3212:
3208:
3204:
3197:
3190:
3187:
3182:
3178:
3173:
3168:
3164:
3160:
3156:
3152:
3147:
3142:
3138:
3134:
3130:
3123:
3120:
3115:
3111:
3107:
3103:
3098:
3093:
3088:
3083:
3079:
3075:
3071:
3064:
3062:
3058:
3053:
3047:
3043:
3042:
3034:
3032:
3030:
3028:
3026:
3022:
3017:
3013:
3009:
3005:
3000:
2995:
2991:
2987:
2980:
2977:
2972:
2966:
2959:
2958:
2950:
2947:
2940:
2936:
2933:
2931:
2928:
2926:
2923:
2922:
2918:
2913:
2910:
2907:
2904:
2901:
2898:
2895:
2892:
2891:
2887:
2885:
2883:
2880:
2879:von Neumann's
2876:
2873:
2868:
2864:
2859:
2854:
2853:Houman Owhadi
2849:
2847:
2828:
2818:
2815:
2812:
2808:
2804:
2799:
2795:
2786:
2781:
2778:
2775:
2771:
2765:
2762:
2751:
2735:
2714:
2702:
2696:
2693:
2684:
2668:
2660:
2644:
2636:
2632:
2628:
2609:
2606:
2603:
2597:
2592:
2588:
2584:
2581:
2578:
2573:
2569:
2548:
2528:
2520:
2493:
2490:
2487:
2481:
2478:
2457:
2445:
2439:
2436:
2427:
2422:
2408:
2405:
2402:
2399:
2396:
2393:
2390:
2368:
2364:
2360:
2352:
2348:
2341:
2334:observations
2316:
2313:
2290:
2284:
2276:
2247:
2244:
2236:
2232:
2228:
2223:
2221:
2217:
2213:
2209:
2205:
2201:
2193:
2191:
2189:
2186:, as well as
2185:
2181:
2177:
2155:
2151:
2130:
2127:
2124:
2103:
2094:
2090:
2088:
2080:
2078:
2071:
2067:
2064:-dimensional
2051:
2028:
2022:
1999:
1993:
1973:
1953:
1930:
1924:
1901:
1895:
1886:
1883:
1880:
1868:
1862:
1859:
1856:
1850:
1844:
1832:
1828:
1823:
1819:
1818:
1817:
1797:
1791:
1788:
1785:
1779:
1776:
1770:
1761:
1758:
1748:
1740:
1739:Lorenz system
1735:
1728:
1726:
1724:
1720:
1716:
1712:
1707:
1693:
1688:
1685:
1681:
1677:
1674:
1654:
1649:
1646:
1642:
1638:
1635:
1615:
1610:
1606:
1602:
1599:
1594:
1590:
1569:
1564:
1560:
1556:
1553:
1533:
1530:
1527:
1524:
1502:
1498:
1494:
1491:
1471:
1451:
1431:
1411:
1408:
1402:
1389:
1385:
1366:
1354:
1338:
1335:
1332:
1329:
1321:
1313:
1311:
1293:
1287:
1261:
1255:
1235:
1215:
1195:
1175:
1155:
1132:
1124:
1120:
1093:
1085:
1081:
1077:
1074:
1068:
1061:, and a loss
1045:
1040:
1037:
1034:
1021:
1017:
1013:
1008:
1004:
994:
957:
953:
944:
940:
936:
931:
927:
920:
915:
910:
907:
904:
900:
894:
891:
886:
880:
874:
865:
860:
858:
854:
853:line searches
850:
849:learning rate
846:
845:deep learning
842:
834:
832:
830:
826:
822:
818:
813:
797:
789:
784:
770:
762:
753:
748:
740:
738:
724:
716:
696:
693:
687:
681:
675:
672:
651:
626:
622:
615:
612:
609:
606:
598:
594:
587:
567:
547:
527:
503:
500:
494:
488:
482:
479:
456:
452:
448:
445:
442:
437:
433:
407:
403:
396:
393:
390:
387:
379:
375:
368:
359:
357:
356:
351:
331:
328:
325:
322:
319:
299:
296:
293:
273:
259:
256:
253:
250:
247:
244:
235:
230:
222:
217:
215:
208:
204:
199:
196:
192:
189:
184:
181:
178:
174:
173:
172:
169:
167:
163:
159:
155:
151:
147:
143:
138:
136:
132:
128:
124:
119:
117:
113:
109:
105:
101:
97:
96:probabilistic
93:
89:
85:
81:
77:
69:
67:
65:
61:
57:
53:
49:
45:
41:
37:
33:
29:
25:
21:
4927:
4921:
4884:
4880:
4870:
4851:
4847:
4837:
4800:
4794:
4775:
4771:
4764:Wahba, Grace
4757:
4730:
4721:
4712:
4708:
4698:
4671:
4667:
4657:
4638:
4634:
4612:
4608:
4602:
4593:
4584:
4513:
4509:
4499:
4485:cite journal
4442:
4438:
4432:
4423:
4419:
4413:
4376:
4372:
4362:
4325:
4321:
4311:
4302:
4298:
4292:
4255:
4251:
4241:
4204:
4200:
4190:
4181:
4177:
4171:
4157:cite journal
4122:
4119:Stat. Comput
4118:
4112:
4098:cite journal
4063:
4060:Stat. Comput
4059:
4025:
4021:
3993:
3989:
3963:
3953:
3902:
3898:
3888:
3869:
3865:
3808:
3804:
3768:
3758:
3741:
3731:
3722:
3712:
3691:
3672:
3659:
3653:(119): 1–59.
3650:
3646:
3636:
3626:
3619:
3609:, retrieved
3604:
3597:
3562:
3558:
3552:
3543:
3530:
3513:
3509:
3503:
3489:cite journal
3462:
3456:
3447:
3441:
3432:
3426:
3417:
3411:
3392:
3388:
3382:
3373:
3367:
3348:
3344:
3304:
3300:
3246:
3242:
3206:
3202:
3189:
3136:
3132:
3122:
3077:
3073:
3040:
2989:
2986:Stat. Comput
2985:
2979:
2956:
2949:
2851:As noted by
2850:
2423:
2275:power series
2230:
2224:
2197:
2173:
2084:
2075:
2015:, and where
1744:
1722:
1718:
1714:
1708:
1394:
1353:determinants
1322:of the form
1317:
866:of the form
861:
838:
816:
785:
758:
741:Optimization
360:
353:
347:
212:
202:
194:
176:
175:They return
170:
139:
120:
107:
103:
99:
95:
76:approximates
75:
73:
70:Introduction
56:optimization
19:
18:
4881:SIAM Review
4258:(3): 1–18.
3771:: 404–413.
3565:(1): 1–22.
3465:: 163–175.
3203:SIAM Review
2683:linear maps
2216:game theory
2204:probability
851:tuning and
223:Integration
190:" elsewhere
152:rules, and
48:integration
40:computation
4894:1503.03467
4715:: 529–540.
4575:2212.12474
4452:2103.06204
4445:: 113961.
4335:1610.05261
4265:2004.00623
4214:1810.03440
4132:1801.01340
4035:2205.15449
4003:2107.00243
3966:(Thesis).
3912:2004.14455
3879:2012.12615
3818:1810.03398
3778:1705.07774
3703:1703.09580
3682:1612.05086
3611:2019-12-12
3572:1512.00933
3546:: 489–496.
3358:2010.09691
3146:1506.01326
2999:1901.04457
2941:References
2517:, under a
2426:quadrature
1725:expended.
745:See also:
177:structured
100:estimation
32:statistics
4819:cite book
4690:204830421
4542:1936-0975
4523:1306.2365
4477:232170649
4386:1306.2365
4284:214774980
3945:216914317
3937:1064-8275
3855:Ipsen, I.
3797:Ipsen, I.
3402:1206.4602
3293:Ipsen, I.
3256:1402.2058
3106:1540-3459
3087:1406.6668
2681:are both
2561:at nodes
2521:prior on
1917:, where
391:…
137:problem.
104:inference
4766:(1970).
4729:(1963).
4727:Sard, A.
4705:Renyi, A
4592:(1912).
4550:14077995
4405:14077995
4354:14299420
4233:88517317
4149:42880142
4090:32226237
3837:53571618
3589:13932715
3323:12460125
3273:16121233
3223:14696405
3181:26346321
3016:67885786
2919:See also
2912:BackPACK
2888:Software
162:Gaussian
108:learning
84:integral
4913:5877877
4457:Bibcode
4081:7089645
3917:Bibcode
3172:4528661
3151:Bibcode
3114:7245255
2894:ProbNum
2629:is the
2229:in his
92:minimum
4942:
4911:
4807:
4745:
4688:
4548:
4540:
4475:
4403:
4352:
4282:
4231:
4147:
4088:
4078:
3943:
3935:
3835:
3587:
3477:
3321:
3271:
3221:
3179:
3169:
3112:
3104:
3048:
3014:
2967:
2906:Emukit
2872:Wald's
271:
263:
90:, the
34:, and
24:active
22:is an
4909:S2CID
4889:arXiv
4686:S2CID
4570:arXiv
4546:S2CID
4518:arXiv
4473:S2CID
4447:arXiv
4401:S2CID
4381:arXiv
4350:S2CID
4330:arXiv
4280:S2CID
4260:arXiv
4229:S2CID
4209:arXiv
4145:S2CID
4127:arXiv
4030:arXiv
3998:arXiv
3941:S2CID
3907:arXiv
3874:arXiv
3862:(PDF)
3833:S2CID
3813:arXiv
3773:arXiv
3698:arXiv
3677:arXiv
3669:(PDF)
3585:S2CID
3567:arXiv
3540:(PDF)
3397:arXiv
3353:arXiv
3319:S2CID
3269:S2CID
3251:arXiv
3219:S2CID
3199:(PDF)
3141:arXiv
3110:S2CID
3082:arXiv
3012:S2CID
2994:arXiv
2961:(PDF)
2657:is a
2044:is a
1946:is a
1715:exact
4940:ISBN
4829:link
4825:link
4805:ISBN
4743:ISBN
4538:ISSN
4491:link
4163:link
4104:link
4086:PMID
3933:ISSN
3495:link
3475:ISBN
3177:PMID
3102:ISSN
3046:ISBN
2965:ISBN
2383:for
2202:and
1628:and
1582:via
843:for
58:and
4932:doi
4899:doi
4856:doi
4780:doi
4735:doi
4676:doi
4643:doi
4528:doi
4465:doi
4443:384
4391:doi
4340:doi
4270:doi
4219:doi
4137:doi
4076:PMC
4068:doi
3968:doi
3925:doi
3823:doi
3746:doi
3577:doi
3518:doi
3467:doi
3309:doi
3261:doi
3211:doi
3167:PMC
3159:doi
3137:471
3092:doi
3004:doi
1546:or
267:and
118:).
106:or
4938:.
4907:.
4897:.
4885:59
4883:.
4879:.
4850:.
4846:.
4821:}}
4817:{{
4776:41
4774:.
4770:.
4741:.
4711:.
4684:.
4672:66
4670:.
4666:.
4637:.
4633:.
4621:^
4611:.
4558:^
4544:.
4536:.
4526:.
4514:11
4512:.
4508:.
4487:}}
4483:{{
4471:.
4463:.
4455:.
4441:.
4422:.
4399:.
4389:.
4377:11
4375:.
4371:.
4348:.
4338:.
4326:29
4324:.
4320:.
4301:.
4278:.
4268:.
4256:31
4254:.
4250:.
4227:.
4217:.
4205:29
4203:.
4199:.
4180:.
4159:}}
4155:{{
4143:.
4135:.
4123:30
4121:.
4100:}}
4096:{{
4084:.
4074:.
4064:27
4062:.
4058:.
4044:^
4028:.
4024:.
4012:^
3996:.
3992:.
3980:^
3962:.
3939:.
3931:.
3923:.
3915:.
3903:43
3901:.
3897:.
3870:22
3868:.
3864:.
3845:^
3831:.
3821:.
3809:29
3807:.
3803:.
3787:^
3767:.
3740:.
3721:.
3675:.
3671:.
3651:18
3649:.
3645:.
3583:.
3575:.
3563:34
3561:.
3542:.
3514:29
3512:.
3491:}}
3487:{{
3393:14
3391:.
3347:.
3331:^
3317:.
3305:14
3303:.
3299:.
3281:^
3267:.
3259:.
3247:25
3245:.
3231:^
3217:.
3207:61
3205:.
3201:.
3175:.
3165:.
3157:.
3149:.
3135:.
3131:.
3108:.
3100:.
3090:.
3078:13
3076:.
3072:.
3060:^
3024:^
3010:.
3002:.
2990:29
2988:.
2421:.
2214:,
2190:.
2182:,
1706:.
1384:.
831:.
358:.
148:,
144:,
102:,
66:.
54:,
50:,
30:,
4948:.
4934::
4915:.
4901::
4891::
4864:.
4858::
4852:3
4831:)
4813:.
4788:.
4782::
4751:.
4737::
4713:1
4692:.
4678::
4651:.
4645::
4639:2
4613:6
4578:.
4572::
4552:.
4530::
4520::
4493:)
4479:.
4467::
4459::
4449::
4407:.
4393::
4383::
4356:.
4342::
4332::
4286:.
4272::
4262::
4235:.
4221::
4211::
4165:)
4151:.
4139::
4129::
4106:)
4092:.
4070::
4038:.
4032::
4006:.
4000::
3974:.
3970::
3947:.
3927::
3919::
3909::
3882:.
3876::
3839:.
3825::
3815::
3781:.
3775::
3752:.
3748::
3725:.
3706:.
3700::
3685:.
3679::
3591:.
3579::
3569::
3524:.
3520::
3497:)
3483:.
3469::
3405:.
3399::
3361:.
3355::
3325:.
3311::
3275:.
3263::
3253::
3225:.
3213::
3183:.
3161::
3153::
3143::
3116:.
3094::
3084::
3054:.
3018:.
3006::
2996::
2973:.