Latin hypercube sampling

Latin hypercube sampling (LHS) is a statistical method for generating a near-random sample of parameter values from a multidimensional distribution. The sampling method is often used to construct computer experiments or for Monte Carlo integration.

LHS was described by Michael McKay of Los Alamos National Laboratory in 1979.[1] An independently equivalent technique was proposed by Eglājs in 1977.[2] It was further elaborated by Ronald L. Iman and coauthors in 1981.[3] Detailed computer codes and manuals were later published.[4]

In the context of statistical sampling, a square grid containing sample positions is a Latin square if (and only if) there is only one sample in each row and each column. A Latin hypercube is the generalisation of this concept to an arbitrary number of dimensions, whereby each sample is the only one in each axis-aligned hyperplane containing it.
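As a minimal illustration (not from the article), this defining property can be checked mechanically. The predicate below, latinHypercubeQ, is a hypothetical Wolfram Language helper; it assumes an n-point sample inside the unit hypercube and tests that each of the n equal-width strata along every axis contains exactly one point.

(* Map each coordinate to its stratum index 1..n, then require every axis to hit each stratum exactly once *)
latinHypercubeQ[pts_] := With[{n = Length[pts]},
  AllTrue[Transpose[pts], Sort[Ceiling[n #]] == Range[n] &]]

(* Example: latinHypercubeQ[{{0.1, 0.7}, {0.6, 0.2}}] returns True *)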

When sampling a function of N variables, the range of each variable is divided into M equally probable intervals. M sample points are then placed to satisfy the Latin hypercube requirements; note that this forces the number of divisions, M, to be equal for each variable. Also note that this sampling scheme does not require more samples for more dimensions (variables); this independence is one of the main advantages of this sampling scheme. Another advantage is that random samples can be taken one at a time, remembering which samples were taken so far.
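As a sketch of this construction (again not from the article), a hypothetical Wolfram Language helper latinHypercubeSample can generate M points in p dimensions on the unit hypercube by drawing an independent random permutation of the M strata for each variable:

(* For each of the p variables, randomly permute the strata 1..m and place one uniform draw inside each stratum; transposing gives an m-by-p matrix of sample points *)
latinHypercubeSample[m_, p_] := Transpose[
  Table[(RandomSample[Range[m]] - RandomReal[{0, 1}, m])/m, {p}]]

(* Example: 10 sample points in 2 dimensions *)
pts = latinHypercubeSample[10, 2];

Because each variable gets its own permutation, adding another variable does not require more sample points, which is the independence property noted above.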

In two dimensions the difference between random sampling, Latin Hypercube sampling, and orthogonal sampling can be explained as follows:

  1. In random sampling new sample points are generated without taking into account the previously generated sample points. One does not necessarily need to know beforehand how many sample points are needed.
  2. In Latin Hypercube sampling one must first decide how many sample points to use, and for each sample point remember in which row and column it was taken. Note that this configuration is similar to placing N rooks on a chess board so that no rook threatens another.
  3. In Orthogonal sampling, the sample space is divided into equally probable subspaces. All sample points are then chosen simultaneously making sure that the total set of sample points is a Latin Hypercube sample and that each subspace is sampled with the same density.

Thus, orthogonal sampling ensures that the set of random numbers is a very good representative of the real variability, LHS ensures that the set of random numbers is representative of the real variability, whereas traditional random sampling (sometimes called brute force) is just a set of random numbers without any such guarantees.
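The following minimal sketch (with illustrative names m, lhs, rnd and columnCounts, and points assumed to lie on the unit square) contrasts the first two schemes by counting how many of 8 points fall into each of the 8 vertical strata:

(* 8 points on the unit square: a Latin hypercube sample versus a plain random sample *)
m = 8;
lhs = Transpose[Table[(RandomSample[Range[m]] - RandomReal[{0, 1}, m])/m, {2}]];
rnd = RandomReal[{0, 1}, {m, 2}];

(* Count how many points land in each of the m vertical strata *)
columnCounts[pts_] := BinCounts[pts[[All, 1]], {0, 1, 1/m}];
columnCounts[lhs]  (* always {1, 1, 1, 1, 1, 1, 1, 1} *)
columnCounts[rnd]  (* typically uneven: some strata empty, others holding several points *)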

References

  1. McKay, M.D.; Beckman, R.J.; Conover, W.J. (May 1979). 'A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code'. Technometrics. American Statistical Association. 21 (2): 239–245. doi:10.2307/1268522. ISSN 0040-1706. JSTOR 1268522. OSTI 5236110.
  2. Eglajs, V.; Audze, P. (1977). 'New approach to the design of multifactor experiments'. Problems of Dynamics and Strengths. 35 (in Russian). Riga: Zinatne Publishing House: 104–107.
  3. Iman, R.L.; Helton, J.C.; Campbell, J.E. (1981). 'An approach to sensitivity analysis of computer models, Part 1. Introduction, input variable selection and preliminary variable assessment'. Journal of Quality Technology. 13 (3): 174–183. doi:10.1080/00224065.1981.11978748.
  4. Iman, R.L.; Davenport, J.M.; Zeigler, D.K. (1980). Latin hypercube sampling (program user's guide). OSTI 5571631.

Further reading

  • Tang, B. (1993). 'Orthogonal Array-Based Latin Hypercubes'. Journal of the American Statistical Association. 88 (424): 1392–1397. doi:10.2307/2291282. JSTOR 2291282.
  • Owen, A.B. (1992). 'Orthogonal arrays for computer experiments, integration and visualization'. Statistica Sinica. 2: 439–452.
  • Ye, K.Q. (1998). 'Orthogonal column Latin hypercubes and their application in computer experiments'. Journal of the American Statistical Association. 93 (444): 1430–1439. doi:10.2307/2670057. JSTOR 2670057.

Latin Hypercube Sampling (LHC)

JOSE LOPEZ-COLLADO


JULY, 2015

After searching for a while, I finally found a paper describing LHC sampling (Swiler and Wyss 2004). LHC re-scales random uniform variates so that the input numbers used to generate the pdf deviates are better dispersed. The paper by Swiler & Wyss presents a detailed example of the algorithm, so anybody can check the results and the algorithm itself (pages 2-9 in the paper).

In essence, the sample size ss divides the sampling space into ss equally probable categories (strata), and the uniform values U are then re-scaled to the limits of their stratum: for stratum i = 1, 2, ..., ss, a uniform value u is mapped to u/ss + (i - 1)/ss, which lies in the interval ((i - 1)/ss, i/ss):

(* HERE IS THE MATHEMATICA CODE WITH COMMENTS *)

SetDirectory[NotebookDirectory[]];

wdist = WeibullDistribution[1.5, 3];

dname = wdist;

(* ss is the sample size*)

ss = 1500;

(*scaleu is the LHS re-scaling, very simple indeed! *)

scaleu[u_, i_, ss_] := u (1/ss) + ((i - 1)/ss);

(* set function as listable, capable of handling lists*)


SetAttributes[scaleu, Listable];

(*get a list of uniform random numbers*)

un = RandomVariate[UniformDistribution[{0, 1}], ss];

(*Get a sequence of integers, 1,2,3,.. ,ss*)

strata = Range[ss];

(*Re-distribute the random numbers using LHC*)

usc = scaleu[un, strata, ss];

(* In this example the target distribution dname is wdist, a Weibull distribution with shape parameter 1.5 and scale parameter 3 *)

(* Obtain a list of random numbers using LHC; note that we use the inverse of the cumulative distribution function (CDF) to translate the re-scaled U values into deviates of the target distribution *)

pvL = Map[InverseCDF[dname, #] &, usc];

(* get some statistics, mean and standard deviation *)

mL = Mean[pvL]

2.70744

sdL = StandardDeviation[pvL]


1.83532

(* The next call is the conventional random sampling, NOT the LHC; it uses the built-in Mathematica function RandomVariate with the distribution name and sample size as arguments *)

pvR = RandomVariate[dname, ss];

(* get the same statistics: mean and standard deviation*)

mR = Mean[pvR]

2.62812

sdR = StandardDeviation[pvR]

1.7902

(* Draw the distributions, Latin Hypercube on the left, regular sampling on the right *)

GraphicsRow[{Histogram[pvL], Histogram[pvR]}]
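For reference (not in the original post), the theoretical moments of the Weibull(1.5, 3) distribution can be obtained directly and compared with the sample estimates above; the exact mean and standard deviation are roughly 2.71 and 1.84, so both sampling schemes come close, with the LHC estimates slightly closer in this run.

(* Theoretical mean and standard deviation of the target distribution, for comparison with mL, sdL, mR and sdR *)

N[Mean[wdist]]

N[StandardDeviation[wdist]]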