Statistica Advanced

Statistica Advanced combines functionality from several different areas and includes all of the features included in the Statistica Base package. Statistica Advanced includes tools for linear/nonlinear models, multivariate exploratory techniques, and power analysis and interval estimation. Summaries of these areas can be found below, and in-depth descriptions of all the included modules can be found on the Modules tab.

Advanced Linear/Nonlinear Models offers a wide array of the most advanced linear and nonlinear modeling tools on the market. It supports continuous and categorical predictors, interactions, and hierarchical models; includes automatic model selection facilities as well as variance components, time series, and many other methods; and all analyses incorporate extensive, interactive graphical support and complete built-in Visual Basic scripting.

It features the following modules:

  • Distributions and Simulation
  • Variance Components and Mixed Model ANOVA/ANCOVA
  • Survival/Failure Time Analysis
  • Cox Proportional Hazard Model
  • General Nonlinear Estimation (and Quick Logit/Probit Regression)
  • Log-Linear Analysis of Frequency Tables
  • Time Series Analysis/Forecasting
  • Structural Equation Modeling/Path Analysis (SEPATH)
  • General Linear Models (GLM)
  • General Regression Models (GRM)
  • Generalized Linear Models (GLZ)
  • General Partial Least Squares Models (PLS)

Statistica's Exploratory Data Analysis tools helped us identify that environmental conditions at one plant were contributing to process variation. An overall reduction in variation is the end result.

Robert Brey, R&D Group
Georgia-Pacific Resins, Inc.

Multivariate Exploratory Techniques offers a broad selection of exploratory techniques, from cluster analysis to advanced classification tree methods, with a vast array of interactive visualization tools for exploring relationships and patterns, plus complete built-in Visual Basic scripting.

Multivariate tools include:

  • Cluster Analysis Techniques
  • Factor Analysis
  • Principal Components & Classification Analysis
  • Canonical Correlation Analysis
  • Reliability/Item Analysis
  • Classification Trees
  • Correspondence Analysis
  • Multidimensional Scaling
  • Discriminant Analysis
  • General Discriminant Analysis Models (GDA)

Power Analysis and Interval Estimation is a powerful toolset for planning and analyzing your research, helping to ensure that you use your resources efficiently. Nothing is more disappointing than realizing that your research findings lack precision because your sample size was too small. On the other hand, a sample size that is too large can be a significant waste of time and resources.

Power Analysis and Interval Estimation will help you find the ideal sample size and enrich your research with a variety of tools for estimating confidence intervals.

Power Analysis and Interval Estimation functions include:

  • Power Calculation
  • Sample Size Calculation
  • Interval Estimation
  • Probability Distributions
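
For readers who want to reproduce the basic calculations outside Statistica, here is a minimal sketch using the open-source statsmodels package (an analogue, not Statistica's own interface); the effect size and error rates below are illustrative assumptions:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Sample size per group needed to detect a medium effect (d = 0.5)
    # with 80% power at alpha = 0.05 in a two-sample t-test:
    n = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
    print(round(n))  # roughly 64 cases per group
    # Conversely, the power achieved with 40 cases per group:
    print(analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05))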

Statistica Base Modules

Descriptive Statistics, Breakdowns and Exploratory Data Analysis

Descriptive Statistics and Graphs

The program will compute practically all common, general-purpose descriptive statistics, including medians, modes, quartiles, user-specified percentiles, means and standard deviations, quartile ranges, confidence limits for the mean, skewness and kurtosis (with their respective standard errors), harmonic means, and geometric means, as well as many specialized descriptive statistics and diagnostics, either for all cases or broken down by one or more categorical (grouping) variables. As with all modules of Statistica, a wide variety of graphs will aid exploratory analyses, e.g., various types of box-and-whisker plots, histograms, bivariate distribution (3D or categorized) histograms, 2D and 3D scatterplots with marked subsets, normal, half-normal, and detrended probability plots, Q-Q plots, P-P plots, etc. A selection of tests is available for fitting the normal distribution to the data (via the Kolmogorov-Smirnov, Lilliefors, and Shapiro-Wilk tests); facilities for fitting a wide variety of other distributions are also available (see also Statistica Process Analysis and the section on fitting in the Graphics section).
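
As a rough open-source analogue of these summaries, the sketch below uses pandas and SciPy; the file and column names are hypothetical:

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("measurements.csv")   # hypothetical input file
    x = df["yield"].dropna()               # hypothetical column

    summary = {
        "mean": x.mean(), "median": x.median(), "std": x.std(),
        "quartiles": x.quantile([0.25, 0.5, 0.75]).tolist(),
        "skewness": stats.skew(x), "kurtosis": stats.kurtosis(x),
        "geometric mean": stats.gmean(x),   # requires positive values
        "harmonic mean": stats.hmean(x),
    }
    print(summary)
    # Normality check analogous to the Shapiro-Wilk option above:
    print(stats.shapiro(x))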

By-Group Analyses (Breakdowns)

Practically all descriptive statistics as well as summary graphs can be computed for data that are categorized (broken down) by one or more grouping variables. For example, with just a few mouse clicks the user can break down the data by Gender and Age and review categorized histograms, box-and-whisker plots, normal probability plots, scatterplots, etc. If more than two categorical variables are chosen, cascades of the respective graphs can be automatically produced. Options to categorize by continuous variables are provided; e.g., you can request that a variable be split into a requested number of intervals, or use the on-line recode facility to custom-define the way in which the variable will be recoded (categorization options of practically unlimited complexity can be specified at any point, and they can reference relations involving all variables in the dataset).

In addition, a specialized hierarchical breakdown procedure is provided that allows the user to categorize the data by up to six categorical variables and compute a variety of categorized graphs, descriptive statistics, and correlation matrices for subgroups (the user can interactively request to ignore some factors in the complete breakdown table and examine statistics for any marginal tables). Numerous formatting and labeling options allow the user to produce publication-quality tables and reports with long labels and descriptions of variables. Note that extremely large analysis designs can be specified in the breakdown procedure (e.g., 100,000 groups for a single categorization variable), and results include all relevant ANOVA statistics (including the complete ANOVA table, tests of assumptions such as the Levene and Brown-Forsythe tests for homogeneity of variance, a selection of seven post-hoc tests, etc.). As in all other modules of Statistica, extended precision calculations (the "quadruple" precision, where applicable) are used to provide an unmatched level of accuracy (see the section on Precision).

Because of the interactive nature of the program, exploration of data is very easy. For example, exploratory graphs can be produced directly from all results Spreadsheets by pointing with the mouse to specific cells or ranges of cells. Cascades of even complex (e.g., multiple categorized) graphs can be produced with a single click of the mouse and reviewed in a slide-show manner. In addition to numerous predefined statistical graphs, countless graphical visualizations of raw data, summary statistics, relations between statistics, as well as all breakdowns and categorizations can be custom-defined by the user via straightforward point-and-click facilities designed to reduce the necessary number of mouse clicks. All exploratory graphical techniques (described in the section on Graphics) are integrated with statistics to facilitate graphical data analyses (e.g., via interactive outlier removal, subset selections, smoothing, function fitting, extensive brushing options allowing the user to easily identify and/or extract the selected data, etc.). See also the section on Block Statistics, below.
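
A comparable by-group breakdown can be sketched in pandas (file and column names hypothetical); pd.cut plays the role of the interval-recoding facility:

    import pandas as pd

    df = pd.read_csv("survey.csv")                 # hypothetical input file
    df["age_band"] = pd.cut(df["age"], bins=4)     # split a continuous variable into intervals
    # Descriptive statistics broken down by Gender and Age band:
    print(df.groupby(["gender", "age_band"], observed=True)["score"]
            .agg(["count", "mean", "median", "std"]))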

Correlations

A comprehensive set of options allows for the exploration of correlations and partial correlations between variables. First, practically all common measures of association can be computed, including Pearson r, Spearman rank order R, Kendall tau (b, c), Gamma, tetrachoric r, Phi, Cramer V, contingency coefficient C, Somers' D, uncertainty coefficients, part and partial correlations, autocorrelations, various distance measures, etc. Correlation matrices can be computed using casewise (listwise) or pairwise deletion of missing data, or mean substitution. As in all other modules of Statistica, extended precision calculations (the "quadruple" precision, where applicable) are used to yield an unmatched level of accuracy (see the section on Precision). Like all other results in Statistica, correlation matrices are displayed in Spreadsheets offering various formatting options (see below) and extensive facilities to visualize numerical results; the user can "point to" a particular correlation in the Spreadsheet and choose to display a variety of "graphical summaries" of the coefficient (e.g., scatterplots with confidence intervals, various 3D bivariate distribution histograms, probability plots, etc.).
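
For a few of the association measures listed above, a minimal SciPy sketch (simulated data, so the exact values are illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 0.6 * x + rng.normal(size=100)

    print(stats.pearsonr(x, y))     # Pearson r with p-value
    print(stats.spearmanr(x, y))    # Spearman rank-order R
    print(stats.kendalltau(x, y))   # Kendall tau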

Brushing and outlier detection

The extensive brushing facilities in the scatterplots allow the user to select/deselect individual points in the plot and assess their effect on the regression line (or other fitted function lines).

Display formats of numbers

A variety of global display formats for correlations are supported; significant correlation coefficients can be automatically highlighted, each cell of the Spreadsheet can be expanded to display n and p, or detailed results may be requested that include all descriptive statistics (pairwise means and standard deviations, B weights, intercepts, etc.). Like all other numerical results, correlation matrices are displayed in Spreadsheets offering the zoom option and interactively-controlled display formats (e.g., from +.4 to +.4131089276410193); thus, large matrices can be compressed (via either the zoom or format-width control adjustable by dragging) to facilitate the visual search for coefficients which exceed a user-specified magnitude or significance level (e.g., the respective cells can be marked red in the Spreadsheet).

Scatterplot, scatterplot matrices, by-group analyses

As in all output selection dialogs, numerous global graphics options are available to further study patterns of relationships between variables, e.g., 2D and 3D scatterplots (with or without case labels) designed to identify patterns of relations across subsets of cases or series of variables. Correlation matrices can be computed as categorized by grouping variables and visualized via categorized scatterplots. Also, "breakdowns of correlation matrices" can be generated (one matrix per subset of data), displayed in queues of Spreadsheets, and saved as stacked correlation matrices (which can later be used as input into the Structural Equation Modeling and Path Analysis [SEPATH] module). An entire correlation matrix can be summarized in a single graph via the Matrix scatterplot option (of practically unlimited density); large scatterplot matrices can then be reviewed interactively by "zooming in" on selected portions of the graph (or scrolling large graphs in the zoom mode). Also, categorized scatterplot matrix plots can be generated (one matrix plot for each subset of data). Alternatively, a multiple-subset scatterplot matrix plot can be created where specific subsets of data (e.g., defined by levels of a grouping variable or selection conditions of any complexity) are marked with distinctive point markers. Various other graphical methods can be used to visualize matrices of correlations in search of global patterns (e.g., contour plots, non-smoothed surfaces, icons, etc.). All of these operations require only a few mouse clicks, and various shortcuts are provided to simplify selections of analyses; any number of Spreadsheets and graphs can be displayed simultaneously on the screen, making interactive exploratory analyses and comparisons very easy.

Basic Statistics From Results Spreadsheets (Tables)

Statistica is a single integrated analysis system that presents all numerical results in spreadsheet tables that are suitable (without any further modification) for input into subsequent analyses. Thus, basic statistics (or any other statistical analysis) can be computed for results tables from previous analyses; for example, you could very quickly compute a table of means for 2,000 variables and then use this table as an input data file to further analyze the distribution of those means across the variables. In this way, basic statistics are available at any time during your analyses and can be applied to any results spreadsheet.
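
The "results table as new input" idea translates directly to a dataframe workflow; a minimal sketch (the file name is hypothetical):

    import pandas as pd

    df = pd.read_csv("wide_data.csv")      # hypothetical file with many variables
    means = df.mean(numeric_only=True)     # first analysis: a table of means
    # Second analysis: treat the results table itself as data and examine
    # the distribution of those means across the variables.
    print(means.describe())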

Block Statistics

In addition to the detailed descriptive statistics that can be computed for every spreadsheet, you can also highlight blocks of numbers in any spreadsheet and produce basic descriptive statistics or graphs for the respective subset of numbers only. For example, suppose you computed a results spreadsheet with measures of central tendency for 2,000 variables (e.g., with means, modes, medians, geometric means, and harmonic means); you could highlight a block of, for example, 200 variables and the means and medians, and then in a single operation produce a multiple line graph of those two measures across the subset of 200 variables. Statistical analysis by blocks can be performed by row or by column; for example, you could also compute a multiple line graph for a subset of variables across the different measures of central tendency. To summarize, the block statistics facilities allow you to produce statistics and statistical graphs from values in arbitrarily selected (highlighted) blocks of values in the current data spreadsheet or output spreadsheet.

Interactive Probability Calculator

A flexible, interactive Probability Calculator is accessible from all toolbars. It features a wide selection of distributions (including Beta, Cauchy, Chi-square, Exponential, Extreme value, F, Gamma, Laplace, Lognormal, Logistic, Pareto, Rayleigh, t (Student), Weibull, and Z (Normal)). Interactively (in-place) updated graphs built into the dialog (plots of the density and distribution functions) allow the user to visually explore distributions, taking advantage of the flexible Statistica Smart MicroScrolls, which allow the user to advance either the last significant digit (press the LEFT mouse button) or the next-to-last significant digit (press the RIGHT mouse button). Facilities are provided for generating customizable, compound graphs of distributions with requested cutoff areas. Thus, this calculator allows you to interactively explore distributions (e.g., the respective probabilities depending on shape parameters).
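
The same kinds of lookups the Probability Calculator performs can be sketched with scipy.stats (the distribution parameters below are illustrative):

    from scipy import stats

    # Density, cumulative probability, and inverse (quantile) values:
    print(stats.norm.pdf(1.96), stats.norm.cdf(1.96), stats.norm.ppf(0.975))
    print(stats.t(df=20).cdf(2.086))           # Student's t
    print(stats.weibull_min(c=1.5).ppf(0.5))   # median of a Weibull distribution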

t-Tests and Other Tests of Group Differences

T-tests for dependent and independent samples, as well as single samples (testing means against user-specified constants), can be computed; multivariate Hotelling's T² tests are also available (see also ANOVA/MANOVA and GLM (General Linear Models)). Flexible options are provided to allow comparisons between variables (e.g., treating the data in each column of the input spreadsheet as a separate sample) and coded groups (e.g., if the data include a categorical variable such as Gender to identify group membership for each case). As with all procedures, extensive diagnostics and graphics options are available from the results menus. For example, for the t-test for independent samples, options are provided to compute t-tests with separate variance estimates, Levene and Brown-Forsythe tests for homogeneity of variance, various box-and-whisker plots, categorized histograms and probability plots, categorized scatterplots, etc. Other (more specialized) tests of group differences are part of many modules (e.g., Nonparametrics, Survival Analysis, Reliability/Item Analysis).
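
A minimal SciPy sketch of the independent-samples case, including the separate-variance (Welch) form and a homogeneity-of-variance check (simulated data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(0.0, 1.0, 50)
    b = rng.normal(0.4, 1.3, 60)

    print(stats.ttest_ind(a, b))                   # pooled-variance t-test
    print(stats.ttest_ind(a, b, equal_var=False))  # separate variance estimates
    print(stats.levene(a, b))                      # homogeneity of variance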

Frequency Tables, Crosstabulation Tables, Stub-and-Banner Tables, Multiple Response Analysis, and Tables

Extensive facilities are provided to tabulate continuous, categorical, and multiple response variables, or multiple dichotomies. A wide variety of options are offered to control the layout and format of the tables. For example, for tables involving multiple response variables or multiple dichotomies, marginal counts and percentages can be based on the total number of respondents or responses, multiple response variables can be processed in pairs, and various options are available for counting (or ignoring) missing data. Frequency tables can also be computed based on user-defined logical selection conditions (of any complexity, referencing any relationships between variables in the dataset) that assign cases to categories in the table. All tables can be extensively customized to produce final (publication-quality) reports. For example, unique "multi-way summary" tables can be produced with breakdown-style, hierarchical arrangements of factors; crosstabulation tables may report row, column, and total percentages in each cell; long value labels can be used to describe the categories in the table; frequencies greater than a user-defined cutoff can be highlighted in the table; etc. The program can display cumulative and relative frequencies, Logit- and Probit-transformed frequencies, normal expected frequencies (and the Kolmogorov-Smirnov, Lilliefors, and Shapiro-Wilk tests), expected and residual frequencies in crosstabulations, etc. Available statistical tests for crosstabulation tables include the Pearson, Maximum-Likelihood, and Yates-corrected Chi-squares; McNemar's Chi-square; the Fisher exact test (one- and two-tailed); Phi; and the tetrachoric r; additional available statistics include Kendall's tau (a, b), Gamma, Spearman r, Somers' D, uncertainty coefficients, etc.
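
A small crosstabulation with a Pearson chi-square, expected frequencies, and a 2x2 Fisher exact test can be sketched with pandas and SciPy (toy data):

    import pandas as pd
    from scipy.stats import chi2_contingency, fisher_exact

    df = pd.DataFrame({"gender": ["m", "f", "m", "f", "m", "f", "m", "m"],
                       "bought": ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]})
    table = pd.crosstab(df["gender"], df["bought"])
    print(pd.crosstab(df["gender"], df["bought"], margins=True))  # with marginal counts
    chi2, p, dof, expected = chi2_contingency(table)
    print(chi2, p, expected)   # Pearson chi-square and expected frequencies
    print(fisher_exact(table)) # exact test for the 2x2 table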

Graphs

Graphical options include simple, categorized (multiple), and 3D histograms, cross-section histograms (for any "slices" of the one-, two-, or multi-way tables), and many other graphs including a unique "interaction plot of frequencies" that summarizes the frequencies for complex crosstabulation tables (similar to plots of means in ANOVA). Cascades of even complex (e.g., multiple categorized, or interaction) graphs can be interactively reviewed. See also the section on Block Statistics, above, and sections on Log-linear Analysis and Correspondence Analysis.

Multiple Regression Methods

The Multiple Regression module is a comprehensive implementation of linear regression techniques, including simple, multiple, stepwise (forward, backward, or in blocks), hierarchical, nonlinear (including polynomial, exponential, log, etc.), ridge regression, with or without intercept (regression through the origin), and weighted least squares models; additional advanced methods are provided in the General Regression Models (GRM) module (e.g., best subset regression, multivariate stepwise regression for multiple dependent variables, models that may include categorical factor effects, statistical summaries for validation and prediction samples, custom hypotheses, etc.). The Multiple Regression module will calculate a comprehensive set of statistics and extended diagnostics, including the complete regression table (with standard errors for B, Beta, and intercept; R-square and adjusted R-square for intercept and non-intercept models; and the ANOVA table for the regression), part and partial correlation matrices, correlations and covariances for regression weights, the sweep matrix (matrix inverse), the Durbin-Watson d statistic, Mahalanobis and Cook's distances, deleted residuals, confidence intervals for predicted values, and many others.
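
A compact statsmodels sketch of an ordinary least squares fit with a few of the diagnostics named above (simulated predictors and response):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))
    y = 1.0 + X @ np.array([0.5, -0.3, 0.0]) + rng.normal(size=200)

    model = sm.OLS(y, sm.add_constant(X)).fit()
    print(model.summary())                 # B weights, standard errors, R-square, ANOVA F
    print(durbin_watson(model.resid))      # Durbin-Watson d statistic
    influence = model.get_influence()
    print(influence.cooks_distance[0][:5]) # Cook's distances for the first cases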

Predicted and residual values

The extensive residual and outlier analysis features a large selection of plots, including a variety of scatterplots, histograms, normal and half-normal probability plots, detrended plots, partial correlation plots, different casewise residual and outlier plots and diagrams, and others. The scores for individual cases can be visualized via exploratory icon plots and other multidimensional graphs integrated directly with the results Spreadsheets. Residual and predicted scores can be appended to the current data file. A forecasting routine allows the user to perform what-if analyses, and to interactively compute predicted scores based on user-defined values of predictors.

By-group analysis; related procedures

Extremely large regression designs can be analyzed. An option is also included to perform multiple regression analyses broken down by one or more categorical variables (multiple regression analysis by group); additional add-on procedures include a regression engine that supports models with thousands of variables, Two-Stage Least Squares regression, and Box-Cox and Box-Tidwell transformations with graphs. Statistica Advanced also includes general nonlinear estimation modules (Nonlinear Estimation, Generalized Linear Models (GLZ), Partial Least Squares models (PLS)) that can estimate practically any user-defined nonlinear model, including Logit, Probit, and others. The program also includes SEPATH, the general Structural Equation Modeling and Path Analysis module, which allows the user to analyze extremely large correlation, covariance, and moment matrices (for intercept models). An implementation of Generalized Additive Models (GAM) is also available in Statistica Data Miner.

Nonparametric Statistics

The Nonparametric Statistics module features a comprehensive selection of inferential and descriptive statistics, including all common tests and some special application procedures. Available statistical procedures include the Wald-Wolfowitz runs test, Mann-Whitney U test (with exact probabilities [instead of the Z approximations] for small samples), Kolmogorov-Smirnov tests, Wilcoxon matched pairs test, Kruskal-Wallis ANOVA by ranks, Median test, Sign test, Friedman ANOVA by ranks, Cochran Q test, McNemar test, Kendall coefficient of concordance, Kendall tau (b, c), Spearman rank order R, Fisher's exact test, Chi-square tests, V-square statistic, Phi, Gamma, Somers' d, contingency coefficients, and others. (Specialized nonparametric tests and statistics are also part of many add-on modules, e.g., Survival Analysis, Process Analysis, and others.) All (rank order) tests can handle tied ranks and apply corrections for small n or tied ranks. The program can handle extremely large analysis designs. As in all other modules of Statistica, all tests are integrated with graphs (including various scatterplots, specialized box-and-whisker plots, line plots, histograms, and many other 2D and 3D displays).
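
Several of the listed tests have direct SciPy analogues; a minimal sketch with simulated groups:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    g1, g2, g3 = (rng.normal(m, 1.0, 30) for m in (0.0, 0.5, 1.0))

    print(stats.mannwhitneyu(g1, g2))   # Mann-Whitney U test
    print(stats.kruskal(g1, g2, g3))    # Kruskal-Wallis ANOVA by ranks
    print(stats.wilcoxon(g1, g2))       # Wilcoxon matched pairs test (paired samples)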

ANOVA/MANOVA

The ANOVA/MANOVA module includes a subset of the functionality of the General Linear Models module and can perform univariate and multivariate analysis of variance of factorial designs with or without one repeated measures variable. For more complicated linear models with categorical and continuous predictor variables, random effects, and multiple repeated measures factors, you need the General Linear Models module (stepwise and best-subset options are available in the General Regression Models module). In the ANOVA/MANOVA module, you can specify all designs in the most straightforward, functional terms of actual variables and levels (not in technical terms, e.g., by specifying matrices of dummy codes), so even less-experienced ANOVA users can analyze very complex designs with Statistica. Like the General Linear Models module, ANOVA/MANOVA provides three alternative user interfaces for specifying designs: (1) a Design Wizard that will take you step by step through the process of specifying a design; (2) a simple dialog-based user interface that will allow you to specify designs by selecting variables, codes, levels, and any design options from well-organized dialogs; and (3) a Syntax Editor for specifying designs and design options using keywords and a common design syntax.

Computational methods. The program will use, by default, the sigma-restricted parameterization for factorial designs and apply the effective hypothesis approach (see Hocking, 1981) when the design is unbalanced or incomplete. Type I, II, III, and IV hypotheses can also be computed, as can Type V and Type VI hypotheses that will perform tests consistent with the typical analyses of fractional factorial designs in industrial and quality-improvement applications (see also the description of the Experimental Design module).

Results statistics

The ANOVA/MANOVA module is not limited in its reporting of results: the full suite of detailed analytic tools available in the General Linear Models module is also available here. Results include summary ANOVA tables, univariate and multivariate results for repeated measures factors with more than 2 levels, the Greenhouse-Geisser and Huynh-Feldt adjustments, plots of interactions, detailed descriptive statistics, detailed residual statistics, planned and post-hoc comparisons, testing of custom hypotheses and custom error terms, and detailed diagnostic statistics and plots (e.g., histogram of within-cell residuals, homogeneity of variance tests, plots of means versus standard deviations, etc.).
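
For a between-groups factorial design, an open-source sketch with statsmodels (the CSV file and column names are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("experiment.csv")   # hypothetical: response, factor_a, factor_b
    model = ols("response ~ C(factor_a) * C(factor_b)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # ANOVA table: main effects and interaction
    # Post-hoc pairwise comparisons for one factor (Tukey HSD):
    print(pairwise_tukeyhsd(df["response"], df["factor_a"]))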

Distribution Fitting

The Distribution Fitting options allow the user to compare the distribution of a variable with a wide variety of theoretical distributions. You may fit to the data the Normal, Rectangular, Exponential, Gamma, Lognormal, Chi-square, Weibull, Gompertz, Binomial, Poisson, Geometric, or Bernoulli distribution. The fit can be evaluated via the Chi-square test or the Kolmogorov-Smirnov one-sample test (the fitting parameters can be controlled); the Lilliefors and Shapiro-Wilk tests are also supported (see above). In addition, the fit of a particular hypothesized distribution to the empirical distribution can be evaluated in customized histograms (standard or cumulative) with overlaid selected functions; line and bar graphs of expected and observed frequencies, discrepancies, and other results can be produced from the output Spreadsheets. Other distribution fitting options are available in Process Analysis, where the user can compute maximum-likelihood parameter estimates for the Beta, Exponential, Extreme Value (Type I, Gumbel), Gamma, Log-Normal, Rayleigh, and Weibull distributions. Also included in that module are options for automatically selecting and fitting the best distribution for the data, as well as options for general distribution fitting by moments (via Johnson and Pearson curves). User-defined 2- and 3-dimensional functions can also be plotted and overlaid on the graphs. The functions may reference a wide variety of distributions, such as the Beta, Binomial, Cauchy, Chi-square, Exponential, Extreme value, F, Gamma, Geometric, Laplace, Logistic, Normal, Log-Normal, Pareto, Poisson, Rayleigh, t (Student), or Weibull distribution, as well as their integrals and inverses. Additional facilities to fit predefined or user-defined functions of practically unlimited complexity to the data are available in Nonlinear Estimation.
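
Maximum-likelihood fitting of one of these distributions, plus a Kolmogorov-Smirnov check of the fit, in a short SciPy sketch (simulated data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    data = 10.0 * rng.weibull(1.8, size=500)

    # Maximum-likelihood Weibull fit with the location fixed at zero:
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    print(shape, scale)
    # Kolmogorov-Smirnov test of the fitted distribution against the data:
    print(stats.kstest(data, "weibull_min", args=(shape, loc, scale)))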

Multivariate Exploratory Technique Modules

Cluster Analysis

This module includes a comprehensive implementation of clustering methods (k-means, hierarchical clustering, two-way joining). The program can process data from either raw data files or matrices of distance measures. The user can cluster cases, variables, or both based on a wide variety of distance measures (including Euclidean, squared Euclidean, City-block (Manhattan), Chebychev, Power distances, Percent disagreement, and 1-r) and amalgamation/linkage rules (including single, complete, weighted and unweighted group average or centroid, Ward's method, and others). Matrices of distances can be saved for further analysis with other modules of the Statistica system. In k-means clustering, the user has full control over the initial cluster centers. Extremely large analysis designs can be processed; for example, hierarchical (tree) joining can analyze matrices with over 1,000 variables, or with over 1 million distances. In addition to the standard cluster analysis output, a comprehensive set of descriptive statistics and extended diagnostics (e.g., the complete amalgamation schedule with cohesion levels in hierarchical clustering, the ANOVA table in k-means clustering) is available. Cluster membership data can be appended to the current data file for further processing. Graphics options in the Cluster Analysis module include customizable tree diagrams, discrete contour-style two-way joining matrix plots, plots of amalgamation schedules, plots of means in k-means clustering, and many others.
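
Hierarchical joining with Ward's rule and a k-means run can be sketched with SciPy and scikit-learn (simulated two-cluster data):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])

    # Hierarchical (tree) joining on Euclidean distances, Ward's amalgamation rule:
    Z = linkage(pdist(X, metric="euclidean"), method="ward")
    tree_labels = fcluster(Z, t=2, criterion="maxclust")

    # k-means clustering into two clusters:
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(tree_labels[:5], km.labels_[:5])
    print(km.cluster_centers_)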

Factor Analysis

The Factor Analysis module contains a wide range of statistics and options, and provides a comprehensive implementation of factor (and hierarchical factor) analytic techniques with extended diagnostics and a wide variety of analytic and exploratory graphs. It will perform principal components, common, and hierarchical (oblique) factor analysis, and can handle extremely large analysis problems (e.g., with thousands of variables). Confirmatory factor analysis (as well as path analysis) can also be performed via the Structural Equation Modeling and Path Analysis (SEPATH) module found in Statistica Advanced Linear/Non-Linear Models.

Principal Components & Classification Analysis

Statistica also includes a designated program for principal components and classification analysis. The output includes eigenvalues (regular, cumulative, relative), factor loadings, factor scores (which can be appended to the input data file, reviewed graphically as icons, and interactively recoded), and a number of more technical statistics and diagnostics. Available rotations include Varimax, Equimax, Quartimax, Biquartimax (either normalized or raw), and Oblique rotations. The factorial space can be plotted and reviewed "slice by slice" in either 2D or 3D scatterplots with labeled variable-points; other integrated graphs include Scree plots, various scatterplots, bar and line graphs, and others. After a factor solution is determined, the user can recalculate (i.e., reconstruct) the correlation matrix from the respective number of factors to evaluate the fit of the factor model. Both raw data files and matrices of correlations can be used as input. Confirmatory factor analysis and other related analyses can be performed with the Structural Equation Modeling and Path Analysis (SEPATH) module available in Statistica Advanced Linear/Non-Linear Models, where a designated Confirmatory Factor Analysis Wizard will guide you step by step through the process of specifying the model.
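
The core computation (eigenvalues, loadings, and scores from the correlation matrix) fits in a few lines of NumPy; a minimal sketch on simulated data:

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 5))

    # Principal components from the correlation matrix:
    Z = (X - X.mean(0)) / X.std(0, ddof=1)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    loadings = eigvecs * np.sqrt(eigvals)      # factor loadings
    scores = Z @ eigvecs                       # component scores per case
    print(eigvals)                             # regular eigenvalues
    print(np.cumsum(eigvals) / eigvals.sum())  # cumulative proportions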


Canonical Correlation Analysis

This module offers a comprehensive implementation of canonical analysis procedures; it can process raw data files or correlation matrices and it computes all of the standard canonical correlation statistics (including eigenvectors, eigenvalues, redundancy coefficients, canonical weights, loadings, extracted variances, significance tests for each root, etc.) and a number of extended diagnostics. The scores of canonical variates can be computed for each case, appended to the data file, and visualized via integrated icon plots. The Canonical Analysis module also includes a variety of integrated graphs (including plots of eigenvalues, canonical correlations, scatterplots of canonical variates, and many others). Note that confirmatory analyses of structural relationships between latent variables can also be performed via the SEPATH (Structural Equation Modeling and Path Analysis) module. Advanced stepwise and best-subset selection of predictor variables for MANOVA/MANCOVA designs (with multiple dependent variables) is available in the General Regression Models (GRM) module.
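
Canonical variates and the canonical correlations for each root can be sketched with scikit-learn's CCA (simulated variable sets):

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 4))                                       # first variable set
    Y = X[:, :2] @ rng.normal(size=(2, 3)) + rng.normal(size=(200, 3))  # second set

    cca = CCA(n_components=2).fit(X, Y)
    U, V = cca.transform(X, Y)   # canonical variate scores for each case
    # Canonical correlation for each root:
    print([np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)])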

Reliability/Item Analysis

This module includes a comprehensive selection of procedures for the development and evaluation of surveys and questionnaires. As in all other modules of Statistica, extremely large designs can be analyzed. The user can calculate reliability statistics for all items in a scale, interactively select subsets, or obtain comparisons between subsets of items via the "split-half" (or split-part) method. In a single run, the user can evaluate the reliability of a sum-scale as well as subscales. When interactively deleting items, the new reliability is computed instantly without processing the data file again. The output includes correlation matrices and descriptive statistics for items, Cronbach alpha, the standardized alpha, the average inter-item correlation, the complete ANOVA table for the scale, the complete set of item-total statistics (including multiple item-total R's), the split-half reliability, and the correlation between the two halves corrected for attenuation. A selection of graphs (including various integrated scatterplots, histograms, line plots and other plots) and a set of interactive what-if procedures are provided to aid in the development of scales. For example, the user can calculate the expected reliability after adding a particular number of items to the scale, and can estimate the number of items that would have to be added to the scale in order to achieve a particular reliability. Also, the user can estimate the correlation corrected for attenuation between the current scale and another measure (given the reliability of the current scale).
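
Cronbach's alpha and the "expected reliability after lengthening the scale" what-if (the Spearman-Brown prophecy formula) are easy to sketch in NumPy (simulated item responses):

    import numpy as np

    def cronbach_alpha(items):
        # items: cases x items matrix of scores
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(0, ddof=1).sum()
                              / items.sum(1).var(ddof=1))

    def spearman_brown(alpha, factor):
        # Expected reliability after lengthening the scale by `factor`:
        return factor * alpha / (1 + (factor - 1) * alpha)

    rng = np.random.default_rng(8)
    true_score = rng.normal(size=(300, 1))
    items = true_score + rng.normal(size=(300, 10))   # 10 noisy items

    a = cronbach_alpha(items)
    print(a, spearman_brown(a, 1.5))   # current alpha; alpha with 50% more items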

Classification Trees

Statistica's Classification Trees module provides a comprehensive implementation of the most recently developed algorithms for efficiently producing and testing the robustness of classification trees (a classification tree is a rule for predicting the class of an object from the values of its predictor variables). Statistica Data Miner offers additional advanced methods for tree classification, such as Boosted Trees, Random Forests, General Classification and Regression Tree Models (GTrees), and General CHAID (Chi-square Automatic Interaction Detection) models. Classification trees can be produced using categorical predictor variables, ordered predictor variables, or both, and using univariate splits or linear combination splits.

Analysis options include performing exhaustive splits or discriminant-based splits; unbiased variable selection (as in QUEST); direct stopping rules (as in FACT) or bottom-up pruning (as in C&RT); pruning based on misclassification rates or on the deviance function; and generalized Chi-square, G-square, or Gini-index goodness of fit measures. Priors and misclassification costs can be specified as equal, estimated from the data, or user-specified. The user can also specify the v value for v-fold cross-validation during tree building, the v value for v-fold cross-validation for error estimation, the size of the SE rule, the minimum node size before pruning, seeds for random number generation, and the alpha value for variable selection. Integrated graphics options are provided to explore the input and output data.
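
A univariate-split tree with Gini impurity, cost-complexity pruning, and v-fold cross-validation for error estimation can be sketched with scikit-learn:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    # Gini goodness-of-fit measure, pruning via the cost-complexity parameter:
    tree = DecisionTreeClassifier(criterion="gini", min_samples_leaf=5,
                                  ccp_alpha=0.01, random_state=0)
    # 10-fold cross-validated classification accuracy:
    print(cross_val_score(tree, X, y, cv=10).mean())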

Correspondence Analysis

This module features a full implementation of simple and multiple correspondence analysis techniques and can analyze even extremely large tables. The program will accept input data files with grouping (coding) variables that are to be used to compute the crosstabulation table; data files that contain frequencies (or some other measure of correspondence, association, similarity, confusion, etc.) and coding variables that identify (enumerate) the cells in the input table; or data files with frequencies (or other measure of correspondence) only (e.g., the user can directly type in and analyze a frequency table). For multiple correspondence analysis, the user can also directly specify a Burt table as input for the analysis.

The program will compute various tables, including the table of row percentages, column percentages, total percentages, expected values, observed minus expected values, standardized deviates, and contributions to the Chi-square values. The Correspondence Analysis module will compute the generalized eigenvalues and eigenvectors and report all standard diagnostics, including the singular values, eigenvalues, and proportions of inertia for each dimension. The user can either manually choose the number of dimensions or specify a cutoff value for the maximum cumulative percent of inertia. The program will compute the standard coordinate values for column and row points. The user has the choice of row-profile standardization, column-profile standardization, row and column profile standardization, or canonical standardization. For each dimension and row or column point, the program will compute the inertia, quality, and cosine-square values. In addition, the user can display (in spreadsheets) the matrices of the generalized singular vectors; like the values in all spreadsheets, these matrices can be accessed via Statistica Visual Basic, for example, in order to implement non-standard methods of computing the coordinates. The user can compute coordinate values and related statistics (quality and cosine-square values) for supplementary points (row or column) and compare the results with the regular row and column points. Supplementary points can also be specified for multiple correspondence analysis.

In addition to the 3D histograms that can be computed for all tables, the user can produce a line plot for the eigenvalues, and 1D, 2D, and 3D plots for the row or column points. Row and column points can also be combined in a single graph, along with any supplementary points (each type of point will use a different color and point marker, so the different types of points can easily be identified in the plots). All points are labeled, and an option is available to truncate the names for the points to a user-specified number of characters.
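
The heart of simple correspondence analysis, the generalized singular value decomposition of the standardized residuals, can be sketched in NumPy for a small frequency table (toy counts):

    import numpy as np

    N = np.array([[30.0, 10.0,  5.0],
                  [10.0, 40.0, 10.0],
                  [ 5.0, 10.0, 30.0]])   # toy two-way frequency table
    P = N / N.sum()
    r, c = P.sum(1), P.sum(0)            # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals

    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    inertia = sv ** 2
    print(inertia / inertia.sum())                  # proportion of inertia per dimension
    row_coords = (U * sv) / np.sqrt(r)[:, None]     # principal row coordinates
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]  # principal column coordinates
    print(row_coords[:, :2])
    print(col_coords[:, :2])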

Multidimensional Scaling

The Multidimensional Scaling module includes a full implementation of (nonmetric) multidimensional scaling. Matrices of similarities, dissimilarities, or correlations between variables (i.e., "objects" or cases) can be analyzed. The starting configuration can be computed by the program (via principal components analysis) or specified by the user. The program employs an iterative procedure to minimize the stress value and the coefficient of alienation. The user can monitor the iterations and inspect the changes in these values. The final configurations can be reviewed via spreadsheets, and via 2D and 3D scatterplots of the dimensional space with labeled item-points. The output includes the values for the raw stress (raw F), Kruskal stress coefficient S, and the coefficient of alienation. The goodness of fit can be evaluated via Shepard diagrams (with d-hats and d-stars). Like all other results in Statistica, the final configuration can be saved to a data file.
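
Nonmetric scaling of a precomputed dissimilarity matrix, with the final stress value, can be sketched with scikit-learn (simulated objects; assumes a reasonably recent scikit-learn):

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(9)
    X = rng.normal(size=(20, 5))
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # dissimilarity matrix

    mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
              random_state=0)
    coords = mds.fit_transform(D)    # final 2D configuration
    print(mds.stress_)               # stress of the final configuration
    print(coords[:3])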

Discriminant Analysis

The Discriminant Analysis module is a full implementation of multiple stepwise discriminant function analysis. Statistica also includes the General Discriminant Analysis Models module (below) for fitting ANOVA/ANCOVA-like designs to categorical dependent variables, and to perform various advanced types of analyses (e.g., best subset selection of predictors, profiling of posterior probabilities, etc.). The Discriminant Analysis program will perform forward or backward stepwise analyses, or enter user-specified blocks of variables into the model.

In addition to the numerous graphics and diagnostics describing the discriminant functions, the program also provides a wide range of options and statistics for the classification of old or new cases (for validation of the model). The output includes the respective Wilks' lambdas, partial lambdas, F to enter (or remove), the p levels, the tolerance values, and the R-square. The program will perform a full canonical analysis and report the raw and cumulative eigenvalues for all roots, and their p levels, the raw and standardized discriminant (canonical) function coefficients, the structure coefficient matrix (of factor loadings), the means for the discriminant functions, and the discriminant scores for each case (which can also be automatically appended to the data file). Integrated graphs include histograms of the canonical scores within each group (and all groups combined), special scatterplots for pairs of canonical variables (where group membership of individual cases is visibly marked), a comprehensive selection of categorized (multiple) graphs allowing the user to explore the distribution and relations between dependent variables across the groups (including multiple box-and-whisker plots, histograms, scatterplots, and probability plots), and many others. The Discriminant Analysis module will also compute the standard classification functions for each group. The classification of cases can be reviewed in terms of Mahalanobis distances, posterior probabilities, or actual classifications, and the scores for individual cases can be visualized via exploratory icon plots and other multidimensional graphs integrated directly with the results spreadsheets. All of these values can be automatically appended to the current data file for further analyses. The summary classification matrix of the number and percent of correctly classified cases can also be displayed. The user has several options to specify the a priori classification probabilities and can specify selection conditions to include or exclude selected cases from the classification (e.g., to validate the classification functions in a new sample).
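
Canonical scores, posterior probabilities, and the proportion of correctly classified cases can be sketched with scikit-learn's linear discriminant analysis (equal priors are assumed here):

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)
    lda = LinearDiscriminantAnalysis(priors=[1/3, 1/3, 1/3]).fit(X, y)

    scores = lda.transform(X)          # discriminant (canonical) scores per case
    posteriors = lda.predict_proba(X)  # posterior classification probabilities
    print(lda.explained_variance_ratio_)   # variance explained by each root
    print((lda.predict(X) == y).mean())    # proportion correctly classified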

General Discriminant Analysis Models (GDA)

The Statistica General Discriminant Analysis (GDA) module is an application and extension of the General Linear Model to classification problems. Like the Discriminant Analysis module, GDA allows you to perform standard and stepwise discriminant analyses. GDA implements the discriminant analysis problem as a special case of the general linear model, and thereby offers extremely useful analytic techniques that are innovative, efficient, and extremely powerful. The computational approach, standard results, and unique features are described below.

Computational approach and unique applications. As in traditional discriminant analysis, GDA allows you to specify a categorical dependent variable. For the analysis, the group membership (with regard to the dependent variable) is then coded into indicator variables, and all methods of GRM can be applied. In the results dialogs, the extensive selection of residual statistics of GRM and GLM is available in GDA as well; for example, you can review all the regression-like residuals and predicted values for each group (each coded dependent indicator variable) and choose from the large number of residual plots. In addition, all of the specialized prediction and classification statistics commonly reviewed in a discriminant analysis are computed; those statistics can be reviewed in innovative ways because of Statistica's unique approach. For example, you can perform "desirability profiling" by combining the posterior prediction probabilities for the groups into a desirability score, and then let the program find the values or combination of categorical predictor settings that will optimize that score. Thus, GDA provides powerful and efficient tools for data mining as well as applied research; for example, you could use the DOE (Design of Experiments) methods to generate an experimental design for quality improvement, apply this design to categorical outcome data (e.g., distinct classifications of an outcome as "superior," "acceptable," or "failed"), and then model the posterior prediction probabilities of those outcomes using the variables of your experimental design.

Standard discriminant analysis results. Statistica GDA will compute all standard results for discriminant analysis, including discriminant function coefficients, canonical analysis results (standardized and raw coefficients, step-down tests of canonical roots, etc.), classification statistics (including Mahalanobis distances, posterior probabilities, actual classification of cases in the analysis sample and validation sample, misclassification matrix, etc.), and so on.

Unique features of GDA, currently only available in Statistica. In addition, Statistica GDA includes numerous unique features and results:

Specifying predictor variables and effects; model building:

1. Support for continuous and categorical predictors; instead of allowing only continuous predictors in the analysis (the common limitation of traditional discriminant function analysis programs), GDA allows the user to specify simple and complex ANOVA- and ANCOVA-like designs, e.g., mixtures of continuous and categorical predictors, polynomial (response surface) designs, factorial designs, nested designs, etc.

2. Multiple-degree of freedom effects in stepwise selection; the terms that make up the predictor set (consisting not only of single-degree of freedom continuous predictors, but also multiple-degree of freedom effects) can be used in stepwise discriminant function analyses; multiple-degree of freedom effects will always be entered/removed as blocks.

3. Best subset selection of predictor effects; single- and multiple-degree of freedom effects can be specified for best-subset discriminant analysis; the program will select the effects (up to a user-specified number of effects) that produce the best discrimination between groups.

4. Selection of predictor effects based on misclassification rates; GDA allows the user to perform model building (selection of predictor effects) not only based on traditional criteria (e.g., p-to-enter/remove; Wilks' lambda), but also based on misclassification rates; in other words, the program will select those predictor effects that maximize the accuracy of classification, either for the cases from which the parameter estimates were computed or for a cross-validation sample (to guard against over-fitting). These techniques elevate GDA to the level of a fast neural-network-like data mining tool for classification that can be used as an alternative to other similar techniques (tree classifiers, designated neural-network methods, etc.); GDA will tend to be faster than those techniques because it is still based on the more efficient General Linear Model.

Results statistics; profiling:

1. Detailed results and diagnostic statistics and plots; in addition to the standard results statistics, GDA provides a large amount of auxiliary information to help the user judge the adequacy of the chosen discriminant analysis model (descriptive statistics and graphs, Mahalanobis distances, Cook distances, and leverages for predictors, etc.).

2. Profiling of expected classification; GDA includes an adaptation of the general GLM (GRM) response profiler; these options allow the user to quickly determine the values (or levels) of the predictor variables that maximize the posterior classification probability for a single group, or for a set of groups in the analyses; in a sense, the user can quickly determine the typical profiles of values of the predictors (or levels of categorical predictors) that identify a group (or set of groups) in the analysis.

A note of caution for models with categorical predictors, and other advanced techniques. The General Discriminant Analysis module provides functionality that makes this technique a general tool for classification and data mining. However, most, if not all, textbook treatments of discriminant function analysis are limited to simple and stepwise analyses with single degree of freedom continuous predictors. Little experience exists in the literature regarding the robustness and effectiveness of these techniques when they are generalized in the manner provided in this very powerful module. The use of best-subset methods, in particular when used in conjunction with categorical predictors or when using the misclassification rates in a cross-validation sample for choosing the best subset of predictors, should be considered a heuristic search method rather than a statistical analysis technique.

Advanced Linear/Non-Linear Modules

Distributions and Simulation

Distributions and Simulation enables users to automatically fit a large number of continuous and categorical distributions to lists of variables. Standard distributions are available (normal, half-normal, log-normal, Weibull, etc.), but specialized and general distributions (Johnson, Gaussian Mixture, Generalized Pareto, Generalized Extreme Value) are also included, and Statistica automatically ranks the quality of the fit for each selected distribution and variable.

In addition, the distributions fit to the list of selected variables and the covariance between the selected variables can be saved for deployment. The Distributions & Simulation module uses this deployment information to generate simulated data sets that faithfully reproduce not only the respective distributions but also the covariances between variables. In short, in addition to facilitating efficient distribution fitting to large numbers of variables, this module enables users to fit general multivariate distributions and simulate from those distributions using cutting-edge simulation techniques (e.g., Latin-Hypercube simulation). When no data are available from which to fit distributions, the Design Simulation tool allows you to generate data from a correlation matrix and a selection of distributions. These methods have proven useful in various domains such as modern DOE, reliability engineering, and risk modeling.
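
The core idea (Latin-Hypercube draws pushed through a correlation structure and target marginals) can be sketched with SciPy's qmc module; the marginals and correlation below are illustrative assumptions:

    import numpy as np
    from scipy.stats import norm, qmc

    corr = np.array([[1.0, 0.7],
                     [0.7, 1.0]])            # target correlation
    L = np.linalg.cholesky(corr)

    u = qmc.LatinHypercube(d=2, seed=0).random(n=1000)  # stratified uniforms
    z = norm.ppf(u) @ L.T                    # correlated standard normals
    x0 = np.exp(z[:, 0])                     # lognormal marginal
    x1 = 10.0 * norm.cdf(z[:, 1])            # uniform(0, 10) marginal
    print(np.corrcoef(z[:, 0], z[:, 1])[0, 1])   # close to 0.7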

Variance Components and Mixed Model ANOVA/ANCOVA

Variance Components and Mixed Model ANOVA/ANCOVA is a specialized module for designs with random effects and/or factors with many levels; options for handling random effects and for estimating variance components are also provided in the General Linear Models module. Random effects (factors) occur frequently in industrial research, when the levels of a factor represent values sampled from a random variable (as opposed to being deliberately chosen or arranged by the experimenter). The Variance Components module will allow you to analyze designs with any combination of fixed effects, random effects, and covariates. Extremely large ANOVA/ANCOVA designs can be efficiently analyzed: factors can have several hundred levels. The program will analyze standard factorial (crossed) designs as well as hierarchically nested designs, and compute the standard Type I, II, and III analysis of variance sums of squares and mean squares for the effects in the model. In addition, you can compute the table of expected mean squares for the effects in the design, the variance components for the random effects in the model, the coefficients for the denominator synthesis, and the complete ANOVA table with tests based on synthesized error sums of squares and degrees of freedom (using Satterthwaite's method). Other methods for estimating variance components are also supported (e.g., MIVQUE0, Maximum Likelihood [ML], Restricted Maximum Likelihood [REML]). For maximum likelihood estimation, both the Newton-Raphson and Fisher scoring algorithms are used, and the model will not be arbitrarily changed (reduced) during estimation to handle situations where most components are at or near zero. Several options for reviewing the weighted and unweighted marginal means, and their confidence intervals, are also available. Extensive graphics options can be used to visualize the results.
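
For a simple one-factor random-effects design, variance components can be estimated by REML with statsmodels' mixed models (the file and column names are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("batches.csv")   # hypothetical: response, batch (random factor)
    # Random-intercept model estimated by REML:
    result = smf.mixedlm("response ~ 1", data=df, groups=df["batch"]).fit(reml=True)
    print(result.summary())
    print(result.cov_re)   # between-batch variance component
    print(result.scale)    # residual (within-batch) variance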

Survival/Failure Time Analysis

This module features a comprehensive implementation of a variety of techniques for analyzing censored data from social, biological, and medical research, as well as procedures used in engineering and marketing (e.g., quality control, reliability estimation, etc.). In addition to computing life tables with various descriptive statistics and Kaplan-Meier product-limit estimates, the user can compare the survivorship functions in different groups using a large selection of methods (including the Gehan test, Cox F-test, Cox-Mantel test, Log-rank test, and Peto & Peto generalized Wilcoxon test). Also, Kaplan-Meier plots can be computed for groups (uncensored observations are identified in graphs with different point markers). The program also features a selection of survival function fitting procedures (including the Exponential, Linear Hazard, Gompertz, and Weibull functions) based on either unweighted or weighted least squares methods (maximum-likelihood parameter estimates for various distributions, including Weibull, can also be computed via the Statistica Process Analysis module).

Finally, the program offers full implementations of four general explanatory models (Cox's proportional hazard model, exponential regression model, and log-normal and normal regression models) with extended diagnostics, including stratified analysis and graphs of survival for user-specified values of predictors. For Cox proportional hazard regression, the user can choose to stratify the sample to permit different baseline hazards in different strata (but a constant coefficient vector), or the user can allow for different baseline hazards as well as coefficient vectors. In addition, general facilities are provided to define one or more time-dependent covariates. Time-dependent covariates can be specified via a flexible formula interpreter that allows the user to define the covariates via arithmetic expressions, which may include time as well as the standard logical functions (e.g., timdep=age+age*log(t_)*(age>45), where t_ references survival time) and a wide variety of distribution functions. As in all other modules of Statistica, the user can access and change the technical parameters of all procedures (or accept dynamic defaults). The module also offers an extensive selection of graphics and specialized diagrams to aid in the interpretation of results (including plots of cumulative proportions surviving/failing, patterns of censored data, hazard and cumulative hazard functions, probability density functions, group comparison plots, distribution fitting plots, various residual plots, and many others).
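
The Kaplan-Meier product-limit estimate itself is short enough to sketch directly in NumPy (ties are handled naively here; toy survival times with 1 = failure, 0 = censored):

    import numpy as np

    def kaplan_meier(time, event):
        # Product-limit estimate of the survivorship function.
        order = np.argsort(time, kind="stable")
        time, event = time[order], event[order]
        n, surv, steps = len(time), 1.0, []
        for i, (t, e) in enumerate(zip(time, event)):
            if e:                      # a failure reduces the survival estimate
                surv *= 1.0 - 1.0 / (n - i)
            steps.append((t, surv))
        return steps

    t = np.array([5.0, 8.0, 12.0, 12.0, 15.0, 21.0, 30.0])
    e = np.array([1, 1, 1, 0, 1, 0, 1])
    print(kaplan_meier(t, e))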

Cox Proportional Hazards Models

The Cox Proportional Hazards Models module is a highly scalable tool whose applications include:

  • analysis of survival data from patients in medical studies
  • customer churn analysis (loss of customer)
  • modeling and failure of mechanical parts (reliability)

This tool allows for flexible handling of censored data, categorical predictors, and designs that include interactions and/or nested effects. It supports model-building techniques such as best-subset and stepwise regression. Deployment of the survival functions on new data is available with Statistica Rapid Deployment.
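
Outside Statistica, the same kind of model can be fit with the third-party lifelines package; a minimal sketch with hypothetical file and column names:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("churn.csv")   # hypothetical: tenure, churned, plus covariates
    cph = CoxPHFitter()
    cph.fit(df, duration_col="tenure", event_col="churned")
    cph.print_summary()             # hazard ratios with significance tests
    # Score ("deploy") the fitted survival functions on new cases:
    new_cases = df.drop(columns=["tenure", "churned"]).head()
    print(cph.predict_survival_function(new_cases))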


General Nonlinear Estimation (and Quick Logit/Probit Regression)

The Nonlinear Estimation module allows the user to fit essentially any type of nonlinear model. One of the unique features of this module is that (unlike traditional nonlinear estimation programs) it does not impose any limits on the size of data files that it can process.

Estimation Methods

The models can be fit using least squares or maximum-likelihood estimation, or any user-specified loss function. When using the least-squares criterion, the very efficient Levenberg-Marquardt and Gauss-Newton algorithms can be used to estimate the parameters for arbitrary linear and nonlinear regression problems; for large datasets or for difficult nonlinear regression problems (such as those rated "higher difficulty" among the Statistical Reference Datasets provided by the National Institute of Standards and Technology; see http://www.nist.gov/itl/div898/strd/index.html), these are the recommended algorithms for computing precise parameter estimates. When using arbitrary loss functions, the user can choose from among four very different, powerful estimation procedures (quasi-Newton, Simplex, Hooke-Jeeves pattern moves, and the Rosenbrock pattern search method of rotating coordinates) so that stable parameter estimates can be obtained in practically all cases, even in extremely numerically demanding conditions (see the Validation Benchmarks).
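
A least-squares fit with the Levenberg-Marquardt algorithm can be sketched with SciPy (the exponential model and noise level are illustrative):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(10)
    x = np.linspace(0.0, 4.0, 50)
    y = 2.5 * np.exp(0.8 * x) + rng.normal(scale=2.0, size=x.size)

    def residuals(theta):
        a, b = theta
        return a * np.exp(b * x) - y   # model: y = a * exp(b * x)

    fit = least_squares(residuals, x0=[1.0, 0.1], method="lm")  # Levenberg-Marquardt
    print(fit.x)   # parameter estimates, close to (2.5, 0.8)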

Models

The user can specify any type of model by typing the respective equation into an equation editor. The equations may include logical operators; thus, discontinuous (piecewise) regression models and models including indicator variables can also be estimated. The equations may also include a wide selection of distribution functions and cumulative distribution functions (Beta, Binomial, Cauchy, Chi-square, Exponential, Extreme value, F, Gamma, Geometric, Laplace, Logistic, Normal, Log-Normal, Pareto, Poisson, Rayleigh, t (Student), or Weibull). The user has full control over all aspects of the estimation procedure (e.g., starting values, step sizes, convergence criteria, etc.). The most common nonlinear regression models are predefined in the Nonlinear Estimation module and can be chosen simply as menu options; these include stepwise Probit and Logit regression, the exponential regression model, and linear piecewise (breakpoint) regression. Note that Statistica also includes implementations of powerful algorithms for fitting generalized linear models, including probit and multinomial logit models, and generalized additive models; see the respective descriptions for additional details.
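
For instance, the breakpoint model mentioned above can be written with a logical term; here is a sketch with scipy.optimize.curve_fit (parameter names and data are hypothetical):

    import numpy as np
    from scipy.optimize import curve_fit

    def piecewise(x, b0, b1, b2, brk):
        # The slope changes from b1 to b1 + b2 once x exceeds the breakpoint;
        # (x > brk) plays the role of the logical operator in the equation.
        return b0 + b1 * x + b2 * (x - brk) * (x > brk)

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 80)
    y = piecewise(x, 1.0, 0.5, 2.0, 6.0) + rng.normal(scale=0.2, size=x.size)

    params, _ = curve_fit(piecewise, x, y, p0=[0.0, 1.0, 1.0, 5.0])
    print(params)  # estimates of intercept, slopes, and break point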

Results

In addition to various descriptive statistics, standard results of the nonlinear estimation include the parameter estimates and their standard errors (computed independently of the estimation itself, via finite differencing, to optimize precision; see the Validation Benchmarks); the variance/covariance matrix of parameter estimates; the predicted values, residuals, and appropriate measures of goodness of fit (e.g., the log-likelihood of the estimated and null models and the Chi-square test of the difference, the proportion of variance accounted for, classification of cases and odds ratios for Logit and Probit models, etc.). Predicted and residual values can be appended to the data file for further analyses. For Probit and Logit models, the incremental fit is also automatically computed when adding or deleting parameters from the regression model; thus, the user can explore the data via a stepwise nonlinear estimation procedure (options for automatic forward and backward stepwise regression, as well as best-subset selection of predictors in logit and probit models, are provided in the Generalized Linear Models module, below).
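
The logit goodness-of-fit quantities mentioned here (log-likelihoods of the estimated and null models, the Chi-square difference, and odds ratios) can be reproduced in outline with the open-source statsmodels package on simulated data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    X = sm.add_constant(rng.normal(size=(200, 2)))
    p = 1.0 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.2, 0.8]))))
    y = rng.binomial(1, p)

    res = sm.Logit(y, X).fit(disp=False)
    lr_chi2 = 2 * (res.llf - res.llnull)  # estimated vs. null model
    print(res.llf, res.llnull, lr_chi2)
    print(np.exp(res.params))             # odds ratios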

Graphs

All output is integrated with an extensive selection of graphs, including interactively adjustable 2D and 3D (surface) arbitrary function fitting graphs that allow the user to visualize the quality of the fit and identify outliers or ranges of discrepancy between the model and the data; the user can interactively adjust the equation of the fitted function (as shown in the graph) without re-processing the data, and thus visualize practically all aspects of the nonlinear fitting process. Many other specialized graphs are provided to evaluate the fitting process and visualize the results, such as histograms of all selected variables and residual values, scatterplots of observed versus predicted values and predicted versus residual values, normal and half-normal probability plots of residuals, and many others.

Log-Linear Analysis of Frequency Tables

This module offers a complete implementation of log-linear modeling procedures for multi-way frequency tables. Note that Statistica also includes the Generalized Linear Models module, which provides options for analyzing binomial and multinomial logit models with coded ANOVA/ANCOVA-like designs. In the Log-Linear Analysis module, the user can analyze up to 7-way tables in a single run. Both complete and incomplete tables (with structural zeros) can be analyzed. Frequency tables can be computed from raw data or entered directly into the program. The Log-Linear Analysis module provides a comprehensive selection of advanced modeling procedures in an interactive and flexible environment that greatly facilitates exploratory and confirmatory analyses of complex tables. The user may at all times review the complete observed table, marginal tables, and fitted (expected) values, and may evaluate the fit of all partial and marginal association models or select specific models (marginal tables) to be fitted to the observed data. The program also offers an intelligent automatic model selection procedure that first determines the necessary order of interaction terms required for a model to fit the data and then, through backward elimination, determines the best sufficient model that satisfactorily fits the data (using criteria determined by the user). The standard output includes G-square (the Maximum-Likelihood Chi-square), the standard Pearson Chi-square with the appropriate degrees of freedom and significance levels, the observed and expected tables, marginal tables, and other statistics. Graphics options available in the Log-Linear Analysis module include a variety of 2D and 3D graphs designed to visualize 2-way and multi-way frequency tables (including interactive, user-controlled cascades of categorized histograms and 3D histograms revealing "slices" of multi-way tables), plots of observed and fitted frequencies, plots of various residuals (standardized residuals, components of the Maximum-Likelihood Chi-square, Freeman-Tukey deviates, etc.), and many others.
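
The core computation (fitting a marginal model to a frequency table and reading off G-square and the Pearson Chi-square) can be sketched as a Poisson GLM with statsmodels; the table counts below are made up:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Flattened 2x3 contingency table: factor A (rows), factor B (columns).
    tab = pd.DataFrame({
        "count": [25, 40, 15, 30, 55, 35],
        "A": ["a1", "a1", "a1", "a2", "a2", "a2"],
        "B": ["b1", "b2", "b3", "b1", "b2", "b3"],
    })

    # Main-effects-only model; its deviance is the G-square test of independence.
    res = smf.glm("count ~ A + B", data=tab, family=sm.families.Poisson()).fit()
    print(res.deviance, res.df_resid)  # G-square and its degrees of freedom
    print(res.pearson_chi2)            # Pearson Chi-square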

Time Series Analysis/Forecasting

The Time Series module contains a wide range of descriptive, modeling, decomposition, and forecasting methods for both time and frequency domain models. These procedures are integrated; that is, the results of one analysis (e.g., ARIMA residuals) can be used directly in subsequent analyses (e.g., to compute the autocorrelation of the residuals). Numerous flexible options are provided to review and plot single or multiple series, and analyses can be performed on even very long series. Multiple series can be maintained in the active work area of the program (e.g., multiple raw input data series or series resulting from different stages of the analysis); the series can be reviewed and compared. The program will automatically keep track of successive analyses and maintain a log of transformations and other results (e.g., ARIMA residuals, seasonal components, etc.). Thus, the user can always return to prior transformations or compare (plot) the original series together with its transformations. Information about the consecutive transformations is maintained in the form of long variable labels, so if you save the newly created variables into a dataset, the "history" of each series is permanently preserved. The specific Time Series procedures are described in the following subsections.

Transformations, Modeling, Plots, Autocorrelations

The available time series transformations allow the user to fully explore patterns in the input series, and to perform all common time series transformations, including: de-trending, removal of autocorrelation, moving average smoothing (unweighted and weighted, with user-defined or Daniell, Tukey, Hamming, Parzen, or Bartlett weights), moving median smoothing, simple exponential smoothing (see also the description of all exponential smoothing options below), differencing, integrating, residualizing, shifting, 4253H smoothing, tapering, Fourier (and inverse) transformations, and others. Autocorrelation, partial autocorrelation, and crosscorrelation analyses can also be performed.
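
Several of these transformations have direct open-source analogues; the sketch below shows differencing, unweighted moving-average smoothing, and (partial) autocorrelations with pandas and statsmodels on a synthetic series:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import acf, pacf

    rng = np.random.default_rng(3)
    y = pd.Series(np.cumsum(rng.normal(size=120)))    # random-walk-like series

    diffed = y.diff().dropna()                        # first differencing
    smooth = y.rolling(window=5, center=True).mean()  # unweighted moving average
    print(acf(diffed, nlags=12))                      # autocorrelations
    print(pacf(diffed, nlags=12))                     # partial autocorrelations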

ARIMA and Interrupted Time Series (Intervention) Analysis

The Time Series module offers a complete implementation of ARIMA. Models may include a constant, and the series can be transformed prior to the analysis; these transformations will automatically be "undone" when ARIMA forecasts are computed, so that the forecasts and their standard errors are expressed in terms of the values of the original input series. Parameters can be estimated via approximate (conditional sums of squares) or exact maximum likelihood, and the ARIMA implementation in the Time Series module is uniquely suited to fitting models with long seasonal periods (e.g., periods of 30 days). Standard results include the parameter estimates, their standard errors, and the parameter correlations. Forecasts and their standard errors can be computed, plotted, and appended to the input series. In addition, numerous options for examining the ARIMA residuals (for model adequacy) are available, including a large selection of graphs. The implementation of ARIMA in the Time Series module also allows the user to perform interrupted time series (intervention) analysis. Several simultaneous interventions may be modeled, each of which can be either a single-parameter abrupt-permanent intervention or a two-parameter gradual or temporary intervention (graphs of the different impact patterns can be reviewed). Forecasts can be computed for all intervention models and can be plotted (together with the input series) as well as appended to the original series.
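
The abrupt-permanent intervention case can be approximated with statsmodels' ARIMA by passing a step dummy as an exogenous regressor (a simplification of full transfer-function intervention models; the data are simulated):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    n, t0 = 120, 80
    step = (np.arange(n) >= t0).astype(float)       # intervention dummy
    y = np.cumsum(rng.normal(size=n)) + 3.0 * step  # level shift at t0

    res = ARIMA(y, exog=step, order=(1, 1, 0)).fit()
    print(res.params)  # AR coefficient and estimated intervention effect

    # Forecasts keep the intervention "switched on" (exog stays at 1).
    fc = res.get_forecast(steps=12, exog=np.ones((12, 1)))
    print(fc.predicted_mean, fc.se_mean)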

Seasonal and Non-Seasonal Exponential Smoothing

The Time Series module contains a complete implementation of all 12 common exponential smoothing models. Models can be specified to contain an additive or multiplicative seasonal component and/or a linear, exponential, or damped trend; thus, the available models include the popular Holt-Winters linear trend models. The user may specify the initial value for the smoothing transformation, the initial trend value, and the seasonal factors (if appropriate). Separate smoothing parameters can be specified for the trend and seasonal components. The user can also perform a grid search of the parameter space in order to identify the best parameters; the respective results spreadsheet reports, for all combinations of parameter values, the mean error, mean absolute error, sum of squares error, mean square error, mean percentage error, and mean absolute percentage error. The smallest value for each of these fit indices is highlighted in the spreadsheet. In addition, the user can request an automatic search for the best parameters with regard to the mean square error, mean absolute error, or mean absolute percentage error (a general function minimization procedure is used for this purpose). The results of the respective exponential smoothing transformation, the residuals, and the requested number of forecasts are available for further analyses and plots. A summary plot is also available to assess the adequacy of the respective exponential smoothing model; it shows the original series together with the smoothed values and forecasts, as well as the smoothing residuals plotted separately against the right-Y axis.
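
A Holt-Winters fit of this kind (here, additive trend with multiplicative seasonality, and the smoothing parameters optimized automatically) can be sketched with statsmodels on simulated monthly data:

    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(5)
    t = np.arange(96)
    y = 70 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(size=96)

    model = ExponentialSmoothing(y, trend="add", seasonal="mul",
                                 seasonal_periods=12)
    res = model.fit()        # numeric search for the smoothing parameters
    print(res.params)        # fitted smoothing level, trend, and seasonal values
    print(res.forecast(12))  # forecasts, 12 steps ahead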

Census Method I – Classical Seasonal Decomposition

The user may specify the length of the seasonal period, and choose either the additive or multiplicative seasonal model. The program will compute the moving averages, ratios or differences, seasonal factors, the seasonally adjusted series, the smoothed trend-cycle component, and the irregular component. Those components are available for further analysis; for example, the user may compute histograms, normal probability plots, etc. for any or all of these components (e.g., to test model adequacy).
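
A classical decomposition of this kind can be sketched with statsmodels' moving-average-based seasonal_decompose (the series below is simulated monthly data):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    rng = np.random.default_rng(6)
    idx = pd.date_range("2015-01-01", periods=72, freq="MS")
    y = pd.Series(100 + 12 * np.sin(2 * np.pi * np.arange(72) / 12)
                  + rng.normal(size=72), index=idx)

    dec = seasonal_decompose(y, model="additive", period=12)
    print(dec.seasonal.head(12))      # seasonal factors
    print(dec.trend.dropna().head())  # smoothed trend-cycle component
    # dec.resid holds the irregular component for adequacy checks.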

Census Method II – X-11 Monthly and Quarterly Seasonal Decomposition and Seasonal Adjustment

The Time Series module contains a full-featured implementation of the US Bureau of the Census X-11 variant of the Census Method II seasonal adjustment procedure. While the original X-11 algorithms were not year-2000 compatible (only data prior to January 2000 could be analyzed), the Statistica implementation of X-11 handles series with dates before January 1, 2000, after that date, or spanning it. The arrangement of options and dialogs closely follows the definitions and conventions described in the Bureau of the Census documentation. Additive and multiplicative seasonal models may be specified. The user may also specify prior trading-day factors and seasonal adjustment factors. Trading-day variation can be estimated via regression (controlling for extreme observations) and, if requested, used to adjust the series. The standard options are provided for graduating extreme observations, for computing the seasonal factors, and for computing the trend-cycle component (the user can choose among various types of weighted moving averages; optimal lengths and types of moving averages can also be chosen automatically by the program). The final components (seasonal, trend-cycle, irregular) and the seasonally adjusted series are automatically available for further analyses and plots; those components can also be saved for further analyses with other programs. The program will produce plots of the different components, including plots categorized by months (or quarters).

Polynomial Distributed Lag Models

The implementation of the polynomial distributed lag methods in the Time Series module will estimate models with unconstrained lags as well as (constrained) Almon distributed lag models. A selection of graphs is available to examine the distributions of the model variables.

Spectrum (Fourier) and Cross-Spectrum Analysis

The Time Series module includes a full implementation of spectrum (Fourier decomposition) analysis and cross-spectrum analysis techniques. The program is particularly suited for the analysis of unusually long time series (e.g., with over 250,000 observations) and imposes no constraints on the length of the series (i.e., the length of the input series does not have to be a multiple of 2). However, the user may also choose to pad or truncate the series prior to the analysis. Standard pre-analysis transformations include tapering, subtraction of the mean, and detrending. For single spectrum analysis, the standard results include the frequency, period, sine and cosine coefficients, periodogram values, and spectral density estimates. The density estimates can be computed using Daniell, Hamming, Bartlett, Tukey, Parzen, or user-defined weights and user-defined window widths. An option that is particularly useful for long input series is to display only a user-defined number of the largest periodogram or density values in descending order; thus, the most salient periodogram or density peaks can be easily identified in long series. The user can compute the Kolmogorov-Smirnov d test for the periodogram values to test whether they follow an exponential distribution (i.e., whether the input is a white-noise series). Numerous plots are available to summarize the results; the user can plot the sine and cosine coefficients, periodogram values, log-periodogram values, spectral density values, and log-density values against the frequencies, period, or log-period. For long input series, the user can choose the segment (period) for which to plot the respective periodogram or density values, thus enhancing the "resolution" of the periodogram or density plot. For cross-spectrum analysis, in addition to the single spectrum results for each series, the program computes the cross-periodogram (real and imaginary parts), co-spectral density, quadrature spectrum, cross-amplitude, coherency values, gain values, and the phase spectrum. All of these can also be plotted against the frequency, period, or log-period, either for all periods (frequencies) or only for a user-defined segment. A user-defined number of the largest cross-periodogram values (real or imaginary) can also be displayed in a spreadsheet in descending order of magnitude to facilitate the identification of salient peaks when analyzing long input series. As with all other procedures in the Time Series module, all of these result series can be appended to the active work area and will be available for further analyses with other time series methods or other Statistica modules.
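
The single-series and cross-series quantities have rough open-source analogues in scipy.signal; here is a sketch of a raw periodogram, a smoothed density estimate, and coherency between two simulated series:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(7)
    n = 1024
    t = np.arange(n)
    x = np.sin(2 * np.pi * t / 32) + rng.normal(scale=0.5, size=n)
    y = np.roll(x, 3) + rng.normal(scale=0.5, size=n)  # lagged copy of x

    f, pxx = signal.periodogram(x, detrend="linear")   # raw periodogram
    f_w, pxx_w = signal.welch(x, nperseg=256)          # smoothed density estimate
    f_c, cxy = signal.coherence(x, y, nperseg=256)     # coherency values
    print(f[np.argmax(pxx)])  # dominant frequency, approximately 1/32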

Regression-Based Forecasting Techniques

Finally, Statistica offers regression-based time series techniques for lagged or non-lagged variables (including regression through the origin, nonlinear regression, and interactive what-if forecasting).

Structural Equation Modeling and Path Analysis (SEPATH)

Statistica includes a comprehensive implementation of structural equation modeling techniques with flexible Monte Carlo simulation facilities (SEPATH). The module is a state-of-the-art program with an "intelligent" user interface. It offers a comprehensive selection of modeling procedures integrated with unique user-interface tools that allow you to specify even complex models without using any command syntax. Via Wizards and Path Tools, you can define the analysis in simple functional terms using menus and dialog boxes (unlike other programs for structural equation modeling, no complex "language" must be mastered).

SEPATH is a complete implementation that includes numerous advanced features: The program can analyze correlation, covariance, and moment matrices (structured means, models with intercepts); all models can be specified via the Path Wizard, Factor Analysis Wizard, and General Path tools; these facilities are highly efficient and allow users to specify even complex models in minutes by making choices from dialogs. The SEPATH module will compute, using constrained optimization techniques, the appropriate standard errors for standardized models, and for models fitted to correlation matrices. The results options include a comprehensive set of diagnostic statistics including the standard fit indices as well as noncentrality-based indices of fit, reflecting the most recent developments in the area of structural equation modeling. The user may fit models to multiple samples (groups), and can specify for each group fixed, free, or constrained (to be equal across groups) parameters. When analyzing moment matrices, these facilities allow you to test complex hypotheses for structured means in different groups. The SEPATH module documentation contains numerous detailed descriptions of examples from the literature, including examples of confirmatory factor analysis, path analysis, test theory models for congeneric tests, multi-trait-multi-method matrices, longitudinal factor analysis, compound symmetry, structured means, etc.

SEPATH Monte Carlo simulation

The Statistica Structural Equation Modeling (SEPATH) module includes powerful simulation options: the user can generate (and save) datasets for predefined models, based on normal or skewed distributions. Bootstrap estimates can be computed, as well as distributions for various diagnostic statistics, parameter estimates, etc., over the Monte Carlo trials. Numerous flexible graphing options are available to visualize the results (e.g., distributions of parameters) from Monte Carlo runs.


Analyzing Linear and Nonlinear Systems

Statistica includes five powerful modules for analyzing linear and nonlinear models: General Linear Models (GLM), General Regression Models (GRM), General Discriminant Analysis Models (GDA), Generalized Linear Models (GLZ), and General Partial Least Squares Models (PLS). Note that Statistica also includes implementations of Generalized Additive Models (GAM), Classification and Regression Trees (C&RT), and General CHAID (Chi-square Automatic Interaction Detection), available in Statistica Data Miner; these modules can also be used to fit nonlinear (ANOVA/ANCOVA-like) models to continuous or categorical dependent (criterion) variables.

All of these modules are extremely comprehensive and advanced implementations of the respective methods, and all of them share some general user interface solutions.

General Features Common to All Five Modules (GLM, GRM, GDA, GLZ, PLS)

Three alternative user-interfaces: (1) Quick-specs dialogs, (2) Wizard, and (3) Syntax. All modules offer three alternative user-interfaces for specifying research designs (e.g., ANOVA/ANCOVA designs, regression designs, response surface designs, mixture designs, etc.; see the description of GLM for details): 

  1. Via Quick-specs dialogs, which prompt the user to specify the necessary variables, etc., given an initial choice of design (e.g., if you choose a response surface design, you are prompted to specify continuous predictors and an optional blocking variable),

    GLZ Quick-spec Dialogs

  2. Via unique, powerful Design Wizards, which lead the user step-by-step through the process of specifying a model, and

    GRM Wizard

  3. Via a simple command syntax that offers a choice of either the traditional SAS® language or the simpler and more flexible VGLM language (both options include "quick entry" dialogs with shortcut buttons and facilities to open syntax files saved in text format).

    GLM Syntax Editor

Automatically generating the syntax statements. One of the unique features of this user interface is that, in the background, Statistica will automatically generate the complete set of syntax statements for any design specified via the Quick-specs dialogs (see point 1 above) or the Wizard (see point 2). These "active" logs of even the most complex and customized designs can be re-run, saved for future use, modified, included in Statistica Visual Basic scripts to be routinely run on new datasets, etc. Because the syntax for specifying general linear model designs is shared by all of these modules, it is also easy to move specifications from one type of analysis to another, for example, in order to fit the same model in GLM and GLZ.

Computation (training) sample, cross-validation (verification) sample, and prediction sample. All five modules will compute detailed residual statistics that can be saved for further analyses with other modules. Another unique feature of these programs is that the predicted and residual statistics can be computed separately for those observations from which the respective results were computed (i.e., the computation or training sample), for observations explicitly excluded from the model fitting computations (the cross-validation or verification sample), and for cases without observed data for the dependent (response) variables (prediction sample). Moreover, all graphical results options (e.g., probability plots, histograms, scatterplots of selected predicted or residual statistics) can be requested for these samples. Thus, all five programs offer exceptionally thorough diagnostic methods for evaluating the quality of the fit of the model.
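
The three-sample idea can be sketched in plain numpy with an ordinary least-squares stand-in (the index ranges for the three samples are hypothetical):

    import numpy as np

    rng = np.random.default_rng(11)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=100)
    y[90:] = np.nan              # prediction sample: no observed response

    train = np.arange(60)        # computation (training) sample
    verify = np.arange(60, 90)   # cross-validation (verification) sample
    predict = np.arange(90, 100) # prediction sample

    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    for name, idx in [("train", train), ("verify", verify)]:
        resid = y[idx] - X[idx] @ beta
        print(name, np.mean(resid ** 2))  # residual diagnostics per sample
    print("predictions:", (X[predict] @ beta)[:3])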

Comparing analyses; modifying analyses. Like all analytic facilities of Statistica, multiple instances of all modules can be kept open at the same time, so multiple analyses can simultaneously be performed on the same or on different datasets. This is extremely useful for comparing the results from different analyses of the same data or the same analyses of different data. Modifying an analysis does not require complete respecification of the analysis; only desired changes need to be specified. Results from different modifications of an analysis can be easily compared. Statistica GLM, GRM, GDA, GLZ, and PLS can take what-if analyses to a new level, by allowing comparisons of different data and different analyses at the same time.

General Linear Models (GLM)

Statistica General Linear Models (GLM) analyzes responses on one or more continuous dependent variables as a function of one or more categorical or continuous independent variables. GLM is not only the most computationally advanced GLM tool currently on the market, but also the most comprehensive and complete application available, offering a larger selection of options, graphs, accompanying statistics, and extended diagnostics than any other program. Designed with a "no compromise" approach, GLM offers the most extensive selection of options to handle the so-called "controversial problems" of the general linear model that do not have any widely agreed-upon solutions. Among many other facilities, GLM offers simple ways to test linear combinations of parameter estimates, specifications of custom error terms and effects, and comprehensive post-hoc comparison methods for between-group effects, repeated measures effects, and the interactions between repeated measures.

The following sections summarize the most important specific advantages of GLM over other programs and the unique features and facilities offered in this module.

Designs. The user can choose simple or highly customized one-way, main-effect, factorial, or nested ANOVA or MANOVA designs, repeated measures designs, simple, multiple, and polynomial regression designs, response surface designs (with or without blocking), mixture surface designs, simple or complex analysis of covariance designs (e.g., with separate slopes), or general multivariate MANCOVA designs. Factors can be fixed or random (in which case synthesized error terms will be computed). All of these designs can be efficiently specified via any of the three types of user interfaces described above and customized in various ways (e.g., you can drop effects, specify custom hypotheses, etc.). Also, GLM can handle extremely large analysis designs; for example, repeated measures factors with 1000 levels can be specified, models may include 1000 covariates, and huge between-group designs can be analyzed very efficiently.

The overparameterized and sigma-restricted model. A detailed discussion is beyond the scope of this summary; most programs offer only the overparameterized model, and a few only the sigma-restricted model; Statistica GLM is the only program available on the market that offers both. Each of the two models has its advantages and disadvantages, and both approaches are necessary for a truly comprehensive GLM computational platform capable of properly handling even the most advanced and demanding analytic problems. For example, nested designs and separate slope designs are best analyzed using the overparameterized model, and the most common way to estimate variance components and to compute synthesized error terms in mixed model ANOVA is based on the overparameterized model. Factorial designs with large numbers of factors, in contrast, are best analyzed using the sigma-restricted model: a simple 2-way interaction of two two-level factors requires only a single column in the design matrix under the sigma-restricted parameterization, but 4 columns in the overparameterized model (see the sketch below); as a result, analyzing even an 8-way full factorial design with GLM requires only a few seconds.
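
The column-count contrast described above can be checked with the open-source patsy package, which supports both codings (the factor names a and b are hypothetical):

    import pandas as pd
    from patsy import dmatrix

    data = pd.DataFrame({"a": ["a1", "a1", "a2", "a2"],
                         "b": ["b1", "b2", "b1", "b2"]})

    # Sigma-restricted (effect) coding: the a:b interaction adds 1 column.
    sigma = dmatrix("C(a, Sum) * C(b, Sum)", data)
    print(sigma.design_info.column_names)

    # Overparameterized coding: one indicator per cell, 4 columns for a x b.
    over = dmatrix("0 + C(a):C(b)", data)
    print(over.design_info.column_names)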

Handling missing cell designs. Statistica GLM will compute the customary Type I through Type IV sums of squares for unbalanced and incomplete designs; however, as is widely acknowledged (e.g., Searle, 1987; Milliken & Johnson, 1986), applying these methods to "messy" designs with missing cells in more or less random locations can lead to misleading, and even blatantly nonsensical, results. Statistica GLM therefore also offers two additional methods for analyzing missing cell designs: Hocking's (1985) "effective hypothesis decomposition," and a method that will automatically drop effects that cannot be fully estimated (e.g., when the least squares means do not exist for all levels of the respective main effect or interaction effect). The latter method is the one commonly applied to the analysis of highly fractionalized designs in industrial experimentation (see also Statistica DOE). It leads to results that are unique (not dependent on the ordering of factor levels), easily interpretable, and consistent with the industrial experimentation literature. This highly useful feature is unique to GLM.

Results statistics. GLM will compute all the standard results, including ANOVA tables with univariate and multivariate tests, descriptive statistics, etc. GLM also offers a large number of results options, and in particular graphics options, that are usually not available in other programs. For example, GLM includes a comprehensive selection of plots of means (observed, least squares, weighted) for higher-order interactions,

GLM Means Summary

with error bars (standard errors) for effects involving between-group factors as well as repeated measures factors;

GLM: Error Bars

extensive residual analyses and plots (for the "training" or computation sample, for a cross-validation or "verification" sample, or for a prediction sample without observed values for the dependent or response variables); plots of variance components; a desirability profiler and response optimization for any model;

GLM: Profiler

and adjusted means for traditional analysis of covariance designs. Extensive and flexible options for specifying planned comparisons are provided, including facilities to specify contrasts using either the traditional command syntax or an extremely simple-to-use (Wizard-style) sequence of "intelligent" contrast dialogs

GLM Contrasts

(you can enter contrast coefficients for clearly labeled levels of factors or cells in the design; the program will then evaluate the comparison for the least squares ("predicted") means, i.e., for the means as predicted by and consistent with the current model; this is a unique solution to the problem of planned comparisons in complex and incomplete designs); simple ways to test linear combinations of parameter estimates (e.g., to test for the equality of specific regression coefficients); specifications of custom error terms and effects; and comprehensive post-hoc comparison methods for between-group effects, repeated measures effects, and the interactions between repeated measures and between-group effects, including: Fisher LSD, Bonferroni, Scheffé, Tukey HSD, Unequal N HSD, Newman-Keuls, Duncan, and Dunnett's tests

GLM Post Hoc Tests

(with flexible options for estimating the appropriate error terms for those tests), and tests of assumptions (e.g., Levene's test, plots of means vs. standard deviations, etc.).

General Regression Models (GRM)

Statistica General Regression Models (GRM) provides a unique, highly flexible implementation of the general linear model, together with a comprehensive set of stepwise regression and best-subset model-building techniques that support both continuous and categorical predictor variables. Stepwise and best-subset methods can be used in GRM to build models for highly complex designs, including designs with effects for categorical predictor variables. Thus, the "general" in General Regression Models refers both to the use of the general linear model and to the fact that, unlike most other stepwise regression programs, GRM is not limited to the analysis of designs that contain only continuous predictor variables.

Stepwise and best-subset selection for continuous and categorical predictors (ANOVA models), for models with multiple dependent variables. GRM is a "sister program" to the Statistica General Linear Models (GLM) module. In addition to the large number of unique analytic options available in GLM (including planned comparisons, custom hypotheses, a wide selection of post-hoc tests, residual analysis options, etc.), the General Regression Models (GRM) module allows you to build models via stepwise and best-subset methods. GRM makes these techniques available not only for traditional analytic problems with a single dependent variable, but extends them to problems with multiple dependent variables; thus, in a sense, GRM can be considered a (very unique) stepwise and best-subset canonical analysis program. These methods can be used with designs that include continuous and/or categorical predictor variables (i.e., ANOVA or ANCOVA designs), and the techniques used in GRM ensure that multiple-degree-of-freedom effects are considered (moved in or out of the model) in blocks. Specifically, GRM allows you to build models via forward- or backward-only selection (effects can be entered or removed only once during the selection process), standard forward or backward selection (effects can be moved in or out of the model at each step, according to F or p to-enter or to-remove criteria), or best-subset selection; this latter method gives the user flexible options to control the models considered during the subset search (e.g., maximum and minimum subset sizes; Mallows' Cp, R-square, and adjusted R-square criteria for best-subset selection; etc.).

Results. The General Regression Models (GRM) module offers all standard and unique results options described in the context of the GLM module in the previous section (including desirability profiling, predicted and residual statistics for the computation or training sample, cross-validation or verification sample, and prediction sample; tests of assumptions, means plots, etc.). In addition, unique regression-specific results options are also available, including Pareto charts of parameter estimates, whole model summaries (tests) with various methods for evaluating no-intercept models, partial and semi-partial correlations, etc.

Generalized Linear Models (GLZ)

The Generalized Linear Models (GLZ) module allows the user to explore both linear and nonlinear relationships between a response variable and categorical or continuous predictor variables (including multinomial logit and probit models, signal detection models, and many others). Special applications of generalized linear models include a number of widely used types of analyses, such as binomial and multinomial logit and probit regression, Signal Detection Theory (SDT) models, and Tweedie models.

The Tweedie distribution is a family of distributions belonging to the class of exponential dispersion models, with variance of the form Var(Y) = φμ^p, where φ > 0 is the dispersion/scale parameter, μ is the mean, and the power p must lie in the interval (−∞, 0] ∪ [1, ∞).
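
As a rough illustration, the open-source statsmodels package exposes the same variance-power idea; the sketch below fits a Tweedie GLM to simulated data (var_power=1.5 is an arbitrary choice, a compound Poisson-gamma case):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    mu = np.exp(X @ np.array([0.5, 0.3]))
    # Crude nonnegative response with a point mass at zero, Tweedie-like.
    y = np.where(rng.random(300) < 0.3, 0.0, rng.gamma(2.0, mu / 2))

    fam = sm.families.Tweedie(var_power=1.5)
    res = sm.GLM(y, X, family=fam).fit()
    print(res.params, res.scale)  # coefficients and dispersion estimate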

(Note that Statistica Data Miner also includes an implementation of Generalized Additive Models, GAM.) The user interface, methods for specifying designs, and overall "touch and feel" of GLZ are similar to those of GLM, GRM, and PLS: the user can easily specify ANOVA- or ANCOVA-like designs, response surface designs, mixture surface designs, etc., so even novice users will have no difficulty applying generalized linear models to their data. The standard results statistics and the comprehensive model checking tools of the module are described under Results below.

GLZ Start Up Dialog

Models and link functions. A wide range of distributions (from the exponential family) can be specified for the response variable: Normal, Poisson, gamma, binomial, multinomial, ordinal multinomial, and inverse Gaussian. Further, the nature of the relationship between the predictors and the responses can be specified by choosing a so-called link function from a comprehensive list of common and special-purpose functions, including the log, power, identity, logit, probit, complementary log-log, and log-log links. Unlike many other nonlinear models, these models can be fitted via fast estimation procedures and allow meaningful interpretations (similar to general linear models); hence, they are extensively employed in the analysis of nonlinear relationships in both basic and applied research.
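
The same distribution-plus-link choice can be sketched with statsmodels as an open-source analogue (recent statsmodels versions; data simulated), here for a binomial response with a probit link:

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    X = sm.add_constant(rng.normal(size=(250, 2)))
    y = rng.binomial(1, norm.cdf(X @ np.array([-0.2, 0.9, -0.6])))

    # Binomial response distribution combined with a probit link function.
    fam = sm.families.Binomial(link=sm.families.links.Probit())
    res = sm.GLM(y, X, family=fam).fit()
    print(res.summary())  # coefficients, standard errors, Wald z-tests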

Stepwise and best-subset selection for continuous and categorical predictors (ANOVA-like models). In addition to the standard model fitting techniques, Statistica GLZ also provides unique options for exploratory analyses, including model building facilities like forward- or backward-only selection of effects (effects can only be selected for inclusion or removal once during the selection process), standard forward or backward stepwise selection of effects (effects can be entered or removed at each step, using a p to enter or remove criterion), and best subset regression methods (using the likelihood score statistic, model likelihood, or Akaike information criterion). These powerful methods can be applied to categorical predictors (ANOVA-like designs; effects will be moved in or out of the model as multiple-parameter blocks) as well as continuous predictors, and will save significant amounts of time when building appropriate models for complex data.

Results. The Generalized Linear Models (GLZ) module will compute all standard results statistics, including likelihood ratio tests, Wald and score tests for significant effects, parameter estimates and their standard errors and confidence intervals, etc. In addition, for ANOVA-like designs, tables and plots of predicted means (the equivalent of the least squares means computed in the general linear model) with their standard errors can be computed to aid in the interpretation of results. GLZ also includes a comprehensive selection of model checking tools, such as spreadsheets and graphs for various residual and outlier detection statistics, including raw residuals, Pearson residuals, deviance residuals, studentized Pearson residuals, studentized deviance residuals, likelihood residuals, differential Chi-square statistics, differential deviance, and generalized Cook distances. As described earlier, predicted and residual statistics can be requested both for observations that were used for fitting the model and for those that were not (i.e., for the cross-validation sample).

General Partial Least Squares Models (PLS)

Partial Least Squares (PLS) includes a comprehensive selection of algorithms for univariate and multivariate partial least squares problems. Because PLS offers a selection of flexible user interfaces identical to those of GLM, GRM, and GLZ, it is very easy to set up a model in one of those modules and quickly analyze the same model in PLS; this flexibility allows even novice users to apply these powerful techniques to their analysis problems. The partial least squares method is a powerful data mining technique, particularly well suited for extracting a small number of dimensions from a large number of predictor and response variables. These methods for analyzing linear systems have become popular only in recent years; thus, many of the algorithms and statistics are still the subject of ongoing research.

The overparameterized and sigma-restricted model for categorical predictors. Like GLM and GLZ, PLS offers both the overparameterized and sigma-restricted parameterizations for categorical predictors (ANOVA-like models). In partial least squares models, the sigma-restricted solution can be particularly useful, because it may produce less complex results (explaining more variability with fewer components made up of design vectors coded in sigma-restricted form).

Algorithms. Statistica PLS implements the two most general algorithms for partial least squares analysis: SIMPLS and NIPALS.
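
scikit-learn provides an open-source NIPALS-type implementation (it does not include SIMPLS); here is a minimal sketch on simulated two-block data:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(10)
    X = rng.normal(size=(100, 20))  # many, partly redundant predictors
    Y = X[:, :3] @ rng.normal(size=(3, 2)) + rng.normal(scale=0.1, size=(100, 2))

    pls = PLSRegression(n_components=3)
    pls.fit(X, Y)
    print(pls.x_loadings_.shape)  # loadings for the extracted components
    print(pls.score(X, Y))        # R-square of the two-block fit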

Results. PLS will compute all the standard results for a partial least squares analysis, and also offers a large number of results options (in particular graphics options) that are usually not available in other implementations; for example, graphs of parameter values as a function of the number of components, two-dimensional plots for all output statistics (parameters, factor loadings, etc.), two-dimensional plots for all residual statistics, etc. Also, like GLM, GRM, and GLZ, the Partial Least Squares module offers extensive residual analysis options, and predicted and residual statistics can be requested for observations that were used for fitting the model (the "training" sample), those that were not (i.e., the cross-validation or verification sample), and for cases without observed data on the dependent (response) variables (the prediction sample).

Power Analysis and Interval Estimation Modules

Advantages

Some of the advantages of Statistica Power Analysis and Interval Estimation are:

  • Precise and fast computational routines, which maintain their accuracy across a broad range of parameters
  • Presentation-quality, automatically-scaled graphs of power vs. sample size, power vs. effect size, and power vs. alpha
  • Protocol statements describing calculations in a form that can be transferred directly to a text document

Power Calculation

Power Calculation allows you to calculate statistical power for a given analysis type (see List of Tests below) and to produce graphs of power as a function of various quantities that affect power in practice, such as effect size, Type I error rate, and sample size.

Sample Size Calculation

Sample Size Calculation allows you to calculate, for a given analysis type (see List of Tests below), the sample size required to attain a given level of power, and to generate plots of required sample size as a function of required power, Type I error rate, and effect size.
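
Both calculations have open-source analogues; for example, with statsmodels (a stand-in, not the Statistica engine), power and the required per-group sample size for a two-sample t-test:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power for n = 50 per group, medium effect (d = 0.5), alpha = .05.
    print(analysis.power(effect_size=0.5, nobs1=50, alpha=0.05))

    # Per-group sample size needed to reach power = .90.
    print(analysis.solve_power(effect_size=0.5, power=0.90, alpha=0.05))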

Interval Estimation

Interval Estimation allows you to calculate, for a given analysis type (see List of Tests below), specialized confidence intervals not generally available in general-purpose statistical packages. These confidence intervals are distinguished in some cases by the fact that they refer to standardized effects, and in others by the fact that they are exact confidence intervals in situations where only approximate techniques have generally been available.

Statistica Power Analysis and Interval Estimation is unique among programs of its type in that it calculates confidence intervals for a number of important statistical quantities, such as the standardized effect size (in t-tests and ANOVA), the correlation coefficient, the squared multiple correlation, the sample proportion, and the difference between proportions (for either independent or dependent samples).

These capabilities, in turn, may be used to construct confidence intervals on quantities such as power and sample size, allowing the user to utilize the data from one study to construct an exact confidence interval on the sample size required for another study.

Probability Distributions

Probability Distributions allows you to perform a variety of calculations on probability distributions that are of special value in performing power and sample size calculations.

The routines are distinguished by their high level of accuracy and by the wide range of parameter values for which they will perform calculations. The noncentral distributions are also distinguished by the ability to calculate a noncentrality parameter that places a given observation at a given percentage point in the noncentral distribution. The ability to perform this calculation is essential to the technique of "noncentrality interval estimation."

These routines, which include the noncentral t, noncentral F, noncentral chi-square, binomial, exact distribution of the correlation coefficient, and the exact distribution of the squared multiple correlation coefficient, are characterized by their ability to solve for an unknown parameter, and for their ability to handle "non-null" cases.

For example, not only can the distribution routine for the Pearson correlation calculate p as a function of r and N for rho=0, it can also perform the calculation for other values of rho. Moreover, it can solve for the exact value of rho that places an observed r at a particular percentage point, for any given N.
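
SciPy does not expose the exact distribution of the correlation coefficient, but the underlying operation (solving for the parameter value that places an observed statistic at a chosen percentage point) can be sketched with a noncentral t distribution; the observed value and degrees of freedom below are hypothetical:

    from scipy.optimize import brentq
    from scipy.stats import nct

    t_obs, df = 2.8, 48

    # Noncentrality that puts t_obs at the 5th percentile of the noncentral t.
    nc = brentq(lambda d: nct.cdf(t_obs, df, d) - 0.05, 0.0, 20.0)
    print(nc)  # one limit of a noncentrality-based confidence interval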

List of Tests

Statistica Power Analysis and Interval Estimation calculates power as a function of sample size, effect size, and Type I error rate for the tests listed below:

  • 1-sample t-test
  • 2-sample independent sample t-test
  • 2-sample dependent sample t-test
  • Planned contrasts
  • 1-way ANOVA (fixed and random effects)
  • 2-way ANOVA
  • Chi-square test on a single variance
  • F-test on 2 variances
  • Z-test (or chi-square test) on a single proportion
  • Z-test on 2 independent proportions
  • McNemar's test on 2 dependent proportions
  • F-test of significance in multiple regression
  • t-test for significance of a single correlation
  • Z-test for comparing 2 independent correlations
  • Log-rank test in survival analysis
  • Test of equal exponential survival, with accrual period
  • Test of equal exponential survival, with accrual period and dropouts
  • Chi-square test of significance in structural equation modeling
  • Tests of "close fit" in structural equation modeling / confirmatory factor analysis

Example Application

Suppose you are planning a 1-way ANOVA to study the effect of a drug.

Prior to planning the study, you find that a similar study has been conducted previously. That study had 4 groups, with N = 50 subjects per group, and obtained an F-statistic of 15.4.

From this information, as a first step, you can (a) gauge the population effect size with an exact confidence interval, and (b) use this information to set a lower bound on the appropriate sample size for your study.

Simply enter the data into a convenient dialog, and results are immediately available.

In this case, we discover that a 90% exact confidence interval on the root-mean-square standardized effect (RmsSE) ranges from about .398 to .686. With effects this strong, it is not surprising that the 90% post hoc confidence interval for power ranges from .989 to almost 1. We can use this information to construct a confidence interval on the actual N needed to achieve a power goal (in this case, .90). This confidence interval ranges from 12 to 31. So, based on the information in the study, we are 90% confident that a sample size no greater than 31 would have been adequate to produce a power of .90.
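
This interval can be reproduced approximately with SciPy, assuming the conventional relation lambda = n(k-1)RmsSE^2 for a balanced one-way design (our reading of the RmsSE definition used here); the limits should come out near the .398 and .686 quoted above:

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import ncf

    F_obs, k, n = 15.4, 4, 50
    df1, df2 = k - 1, k * (n - 1)  # 3 and 196

    # 90% CI for the noncentrality: observed F at the 95th / 5th percentile.
    lam_lo = brentq(lambda l: ncf.cdf(F_obs, df1, df2, l) - 0.95, 1e-9, 1000.0)
    lam_hi = brentq(lambda l: ncf.cdf(F_obs, df1, df2, l) - 0.05, 1e-9, 1000.0)

    rmsse = lambda lam: np.sqrt(lam / (n * df1))
    print(rmsse(lam_lo), rmsse(lam_hi))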

Turning to our own study, suppose we examine the relationship between power and effect size for a sample size of 31. The first graph shows quite clearly that as long as the effect size for our drug is in the range of the confidence interval for the previous study, our power will be quite high; however, should the actual effect size for our drug be on the order of .25, power will be inadequate.

If, on the other hand, we use a sample size comparable to that of the previous study (i.e., 50 per group), we discover that power will remain quite reasonable, even for effects on the order of .28.

With Statistica Power Analysis and Interval Estimation, this entire analysis runs in just a minute or two.