A monad for Quantile Regression workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for Quantile Regression workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Quantile Regression? Assume we have a set of two-dimensional points, each point being a pair of an independent variable value and a dependent variable value. We want to find a curve that is a function of the independent variable and that splits the points in such a way that, say, 30% of the points are above that curve. This is done with Quantile Regression, see [Wk2, CN1, AA2, AA3]. Quantile Regression is a method to estimate the variable relations for all parts of the distribution. (Not just, say, the mean of the relationships found with Least Squares Regression.)

The goal of the monad design is to make the specification of Quantile Regression workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario. Since Quantile Regression is often compared with Least Squares Regression and some types of filtering (like Moving Average), those functionalities are also included in the monad design scenarios.

The monad is named QRMon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1], and the Quantile Regression package "QuantileRegression.m", [AAp4, AA2].

The data for this document is read from WL’s repository or created ad-hoc.

The monadic programming design is used as a Software Design Pattern. The QRMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of Quantile Regression workflows.

Here is an example of using the QRMon monad over heteroscedastic data:

QRMon-introduction-monad-pipeline-example-table

QRMon-introduction-monad-pipeline-example-echo

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As mentioned above, the monad QRMon can be seen as a DSL. Because of this the monad pipelines made with QRMon are sometimes called "specifications".

Remark: With "regression quantile" we mean "a curve or function that is computed with Quantile Regression".

Contents description

The document has the following structure.

  • The sections "Package load" and "Data load" obtain the needed code and data.
  • The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

  • The sections "QRMon overview" and "Monad elements" provide technical description of the QRMon monad needed to utilize it.

    • (Using a fair amount of examples.)
  • The section "Unit tests" describes the tests used in the development of the QRMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section "Future plans" outlines future directions of development.
  • The section "Implementation notes" just says that QRMon’s development process and this document follow the ones of the classifications workflows monad ClCon, [AA6].

Remark: One can read only the sections "Introduction", "Design considerations", "Monad design", and "QRMon overview". That set of sections provides a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.


Package load

The following commands load the packages [AAp1–AAp6]:

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load data that is used in the rest of the document. The time series data is obtained through WL’s repository.

The data summarization and plots are done through QRMon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp6].

Distribution data

The following data is generated to have heteroscedasticity, see https://en.wikipedia.org/wiki/Heteroscedasticity .

distData =
  Table[{x, Exp[-x^2] + RandomVariate[NormalDistribution[0, 0.15 Sqrt[Abs[1.5 - x]/1.5]]]},
   {x, -3, 3, 0.01}];
Length[distData]
Length[distData]

(* 601 *)

QRMonUnit[distData]⟹QRMonEchoDataSummary⟹QRMonPlot;
QRMon-distData

Temperature time series

tsData = WeatherData[{"Orlando", "USA"}, "Temperature", {{2015, 1, 1}, {2018, 1, 1}, "Day"}];

QRMonUnit[tsData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-tsData

Financial time series

The following data is typical for financial time series. (Note the differences with the temperature time series.)

finData = TimeSeries[FinancialData["NYSE:GE", {{2014, 1, 1}, {2018, 1, 1}, "Day"}]];

QRMonUnit[finData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-finData

Design considerations

The steps of the main regression workflow addressed in this document follow. (A minimal pipeline covering the central steps is sketched after the list.)

  1. Retrieving data from a data repository.

  2. Optionally, transform the data.

    1. Delete rows with missing fields.

    2. Rescale data along one or both of the axes.

    3. Apply moving average (or moving median, or moving map).

  3. Verify assumptions of the data.

  4. Run a regression algorithm with a certain basis of functions using:

    1. Quantile Regression, or

    2. Least Squares Regression.

  5. Visualize the data and regression functions.

  6. If the fit of the regression functions is not satisfactory, go to step 4.

  7. Utilize the found regression functions to compute:

    1. outliers,

    2. local extrema,

    3. approximation or fitting errors,

    4. conditional density distributions,

    5. time series simulations.
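Here is a minimal QRMon pipeline sketch that covers steps 1, 2, 4, and 5 of the list above. (It uses the temperature time series tsData from the section "Data load"; the window, knot, and quantile choices are illustrative, not prescriptive.)

QRMonUnit[tsData]⟹
  QRMonMovingAverage[5]⟹
  QRMonQuantileRegression[12, {0.25, 0.5, 0.75}]⟹
  QRMonDateListPlot;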

The following flow-chart corresponds to the list of steps above.

Quantile-regression-workflow-extended

In order to address:

  • the introduction of new elements in regression workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for regression workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a regression workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […] Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

Monad design

The monad we consider is designed to speed up the programming of the quantile regression workflows outlined in the previous section. The monad is named QRMon for "Quantile Regression Monad".

We want to be able to construct monad pipelines of the general form:

QRMon-formula-1

QRMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

QRMon-formula-2

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of QRMon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or

  2. The object is (relatively) hard to compute.

Such objects are transformed data, regression functions, and outliers.
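For example, after a quantile regression computation the context is expected to hold the data and the found regression functions. (A sketch; compare with the context keys shown in the sub-section "Finding outliers" below.)

Keys[
 QRMonUnit[distData]⟹
  QRMonQuantileRegression[12, 0.5]⟹
  QRMonTakeContext]

(* expected: {"data", "regressionFunctions"} *)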

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial quantile regression workflows.

  • The monad works with time series, numerical matrices, and numerical vectors.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • The monad can do quantile regression with B-spline bases, and quantile regression fits and least squares fits with specified bases of functions.

  • The monad allows cursory examination and summarization of the data.

  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • It is easy to plot different combinations of data, regression functions, outliers, approximation errors, etc.

The QRMon components and their interactions are fairly simple.

The main QRMon operations implicitly put in the context or utilize from the context the following objects:

  • (time series) data,

  • regression functions,

  • outliers and outlier regression functions.

Note that the set of types of QRMon pipeline values is fairly heterogeneous and certain awareness of "the current pipeline value" is assumed when composing QRMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

QRMon overview

When using a monad we lift certain data into the "monad space", using monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the QRMon monad is done with the function QRMonUnit. Results from the monad can be obtained with the functions QRMonTakeValue, QRMonTakeContext, or with the other QRMon functions with the prefix "QRMonTake" (see below.)
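Here is a small, complete example of that lift-compute-take cycle (using distData from the section "Data load"):

qFuncs =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[12, {0.25, 0.75}]⟹
   QRMonTakeRegressionFunctions;

Keys[qFuncs]

(* {0.25, 0.75} *)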

Here is a corresponding diagram of a generic computation with the QRMon monad:

QRMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete QRMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is shown together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The QRMon functions are separated into four groups:

  • operations,

  • setters and droppers,

  • takers,

  • State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the QRMon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the QRMon monad functions with the pipeline value and context.

QRMon-monad-functions-overview-table

The following table shows the functions that are function synonyms or short-cuts.

QRMon-monad-functions-shortcuts-table

State monad functions

Here are the QRMon State Monad functions (generated using the prefix "QRMon", [AAp1, AA1]):

QRMon-StMon-functions-overview-table

Monad elements

In this section we show that QRMon has all of the properties listed in the previous section.

The monad head

The monad head is QRMon. Anything wrapped in QRMon can serve as the monad's pipeline value. It is better, though, to use the constructor QRMonUnit. (Which adheres to the definition in [Wk1].)

QRMon[{{1, 223}, {2, 323}}, <||>]⟹QRMonEchoDataSummary;
The-monad-head-output

Lifting data to the monad

The function lifting the data into the monad QRMon is QRMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

QRMonUnit[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

QRMonUnit[]⟹QRMonSetData[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

(See the sub-section "Setters, droppers, and takers" for more details of setting and taking values in QRMon contexts.)

Currently the monad can deal with data in the following forms:

  • time series,

  • numerical vectors,

  • numerical matrices of rank two.

When the data lifted to the monad is a numerical vector vec it is assumed that vec becomes the second column of a "time series" matrix; the first column is derived with Range[Length[vec]].
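Here is a sketch that verifies that behavior. (It assumes QRMonTakeData is the data taker, in line with the "QRMonTake" prefix convention mentioned above, and that the taken data is a two-column matrix.)

vec = RandomReal[{0, 1}, 100];
(QRMonUnit[vec]⟹QRMonTakeData) == Transpose[{Range[Length[vec]], vec}]

(* expected: True *)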

Generally, WL makes it easy to extract columns from datasets in order to obtain numerical matrices, so datasets are not currently supported in QRMon.

Quantile regression with B-splines

This computes quantile regression with a B-spline basis over 12 regularly spaced knots. (Using Linear Programming algorithms; see [AA2] for details.)

QRMonUnit[distData]⟹
  QRMonQuantileRegression[12]⟹
  QRMonPlot;
Quantile-regression-with-B-splines-output-1

The monad function QRMonQuantileRegression has the same options as QuantileRegression. (The default value for option Method is different, since using "CLP" is generally faster.)

Options[QRMonQuantileRegression]

(* {InterpolationOrder -> 3, Method -> {LinearProgramming, Method -> "CLP"}} *)

Let us compute regression using a list of particular knots, specified quantiles, and the method "InteriorPoint" (instead of the Linear Programming library CLP):

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[{-3, -2, -1, 0, 1, 1.5, 2.5, 3}, Range[0.1, 0.9, 0.2], Method -> {LinearProgramming, Method -> "InteriorPoint"}]⟹
   QRMonPlot;
Quantile-regression-with-B-splines-output-2

Remark: As mentioned above, the function QRMonRegression is a synonym of QRMonQuantileRegression.

The fit functions can be extracted from the monad with QRMonTakeRegressionFunctions, which gives an association of quantiles and pure functions.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-with-B-splines-output-3

Quantile regression fit and Least squares fit

Instead of using a B-spline basis of functions we can compute a fit with our own basis of functions.

Here is a basis of functions:

bFuncs = Table[PDF[NormalDistribution[m, 1], x], {m, Min[distData[[All, 1]]], Max[distData[[All, 1]]], 1}];
Plot[bFuncs, {x, Min[distData[[All, 1]]], Max[distData[[All, 1]]]}, 
 PlotRange -> All, PlotTheme -> "Scientific"]
Quantile-regression-fit-and-Least-squares-fit-basis

Here we do a Quantile Regression fit, a Least Squares fit, and plot the results:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bFuncs]⟹
   QRMonLeastSquaresFit[bFuncs]⟹
   QRMonPlot;
   
Quantile-regression-fit-and-Least-squares-fit-output-1

Remark: The functions "QRMon*Fit" generally take a second argument specifying the symbol of the basis functions' independent variable. Often that symbol can be omitted and implied. (Which can be seen in the pipeline above.)

Remark: As mentioned above, the function QRMonRegressionFit is a synonym of QRMonQuantileRegressionFit, and QRMonFit is a synonym of QRMonLeastSquaresFit.

As pointed out in the previous sub-section, the fit functions can be extracted from the monad with QRMonTakeRegressionFunctions. Here the keys of the returned/taken association consist of quantiles and "mean", since we applied both Quantile Regression and Least Squares Regression.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-fit-and-Least-squares-fit-output-2

Default basis to fit (using Chebyshev polynomials)

One of the main advantages of using the function QuantileRegression of the package [AAp4] is that the functions used to do the regression are specified with a few numeric parameters. (Most often only the number of knots is sufficient.) This is achieved by using a basis of B-spline functions of a certain interpolation order.

Since we want similar behavior for Quantile Regression fitting, we need to select a certain well-known basis with desirable properties. Such a basis is given by the Chebyshev polynomials of the first kind, [Wk3]. Chebyshev polynomial bases can be easily generated in Mathematica with the functions ChebyshevT or ChebyshevU.
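For example, here are the first four Chebyshev polynomials of the first kind:

Table[ChebyshevT[i, x], {i, 0, 3}]

(* {1, x, -1 + 2 x^2, -3 x + 4 x^3} *)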

Here is an application of fitting with a basis of 12 Chebyshev polynomials of the first kind:

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[12]⟹
  QRMonLeastSquaresFit[12]⟹
  QRMonPlot;
Default-basis-to-fit-output-1-and-2

The code above is equivalent to the following code:

bfuncs = Table[ChebyshevT[i, Rescale[x, MinMax[distData[[All, 1]]], {-0.95, 0.95}]], {i, 0, 12}];

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bfuncs]⟹
   QRMonLeastSquaresFit[bfuncs]⟹
   QRMonPlot;
Default-basis-to-fit-output-1-and-2

The shrinking of the ChebyshevT domain seen in the definition of bfuncs is done in order to prevent approximation error effects at the ends of the data domain. The following code uses the ChebyshevT domain {-1, 1} instead of the domain {-0.95, 0.95} used above.

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[{4, {-1, 1}}]⟹
  QRMonPlot;
Default-basis-to-fit-output-3

Regression functions evaluation

The computed quantile and least squares regression functions can be evaluated with the monad function QRMonEvaluate.

Evaluation for a given value of the independent variable:

p⟹QRMonEvaluate[0.12]⟹QRMonTakeValue

(* <|0.25 -> 0.930402, 0.5 -> 1.01411, 0.75 -> 1.08075, "mean" -> 0.996963|> *)

Evaluation for a vector of values:

p⟹QRMonEvaluate[Range[-1, 1, 0.5]]⟹QRMonTakeValue

(* <|0.25 -> {0.258241, 0.677461, 0.943299, 0.703812, 0.293741}, 
     0.5 -> {0.350025, 0.768617, 1.02311, 0.807879, 0.374545}, 
     0.75 -> {0.499338, 0.912183, 1.10325, 0.856729, 0.431227}, 
     "mean" -> {0.355353, 0.776006, 1.01118, 0.783304, 0.363172}|> *)

Evaluation for complicated lists of numbers:

p⟹QRMonEvaluate[{0, 1, {1.5, 1.4}}]⟹QRMonTakeValue

(* <|0.25 -> {0.943299, 0.293741, {0.0762883, 0.10759}}, 
     0.5 -> {1.02311, 0.374545, {0.103386, 0.139142}}, 
     0.75 -> {1.10325, 0.431227, {0.133755, 0.177161}}, 
     "mean" -> {1.01118, 0.363172, {0.107989, 0.142021}}|> *)
   

The obtained values can be used to compute estimates of the distributions of the dependent variable. See the sub-sections "Estimating conditional distributions" and "Dependent variable simulation".

Errors and error plots

Here with "errors" we mean the differences between data’s dependent variable values and the corresponding values calculated with the fitted regression curves.

In the pipeline below we compute a couple of regression quantiles, plot them together with the data, plot the errors, compute the errors, and summarize them.

QRMonUnit[finData]⟹
  QRMonQuantileRegression[10, {0.5, 0.1}]⟹
  QRMonDateListPlot[Joined -> False]⟹
  QRMonErrorPlots["DateListPlot" -> True, Joined -> False]⟹
  QRMonErrors⟹
  QRMonEchoFunctionValue["Errors summary:", RecordsSummary[#[[All, 2]]] & /@ # &];
Errors-and-error-plots-output-1

Each of the functions QRMonErrors and QRMonErrorPlots computes the errors. (That computation is considered cheap.)
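Here is a sketch of taking the computed errors as a pipeline value for custom processing. (The result shape, an association from quantiles to lists of {x, error} pairs, is inferred from the errors summary computation in the pipeline above.)

errs =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[12, 0.5]⟹
   QRMonErrors⟹
   QRMonTakeValue;

RecordsSummary[errs[0.5][[All, 2]]]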

Finding outliers

Finding outliers can be done with the function QRMonOutliers. The outliers found by QRMonOutliers are simply points that are below or above certain regression quantile curves, for example, the ones corresponding to the quantiles 0.02 and 0.98.

Here is an example:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[6, {0.02, 0.98}]⟹
   QRMonOutliers⟹
   QRMonEchoValue⟹
   QRMonOutliersPlot;
Finding-outliers-output-1

The function QRMonOutliers puts in the context values for the keys "outliers" and "outlierRegressionFunctions". The former is for the found outliers, the latter is for the functions corresponding to the used regression quantiles.

Keys[p⟹QRMonTakeContext]

(* {"data", "regressionFunctions", "outliers", "outlierRegressionFunctions"} *)

Here are the corresponding quantiles of the plot above:

Keys[p⟹QRMonTakeOutlierRegressionFunctions]

(* {0.02, 0.98} *)

The control of the outliers computation is done through the arguments and options of QRMonQuantileRegression (or the rest of the regression calculation functions).

If only one regression quantile is found in the context and the corresponding quantile is less than 0.5 then QRMonOutliers finds only bottom outliers. If only one regression quantile is found in the context and the corresponding quantile is greater than 0.5 then QRMonOutliers finds only top outliers.

Here is an example for finding only the top outliers:

QRMonUnit[finData]⟹
  QRMonQuantileRegression[5, 0.8]⟹
  QRMonOutliers⟹
  QRMonEchoFunctionContext["outlier quantiles:", Keys[#outlierRegressionFunctions] &]⟹
  QRMonOutliersPlot["DateListPlot" -> True];
  
Finding-outliers-output-2

Plotting outliers

The function QRMonOutliersPlot makes an outliers plot. If the outliers are not in the context then QRMonOutliersPlot calls QRMonOutliers first.

Here are the options of QRMonOutliersPlot:

Options[QRMonOutliersPlot]

(* {"Echo" -> True, "DateListPlot" -> False, ListPlot -> {Joined -> False}, Plot -> {}} *)

The default behavior is to echo the plot. That can be suppressed with the option "Echo".

QRMonOutliersPlot uses Show to combine two plots:

  • one with ListPlot (or DateListPlot) for the data and the outliers,

  • the other with Plot for the regression quantiles used to find the outliers.

That is why separate lists of options can be given to manipulate those two plots. The option "DateListPlot" can be used to make plots with date or time axes.

QRMonUnit[tsData]⟹
 QRMonQuantileRegression[12, {0.01, 0.99}]⟹
 QRMonOutliersPlot[
  "Echo" -> False,
  "DateListPlot" -> True,
  ListPlot -> {PlotStyle -> {Green, {PointSize[0.02], 
       Red}, {PointSize[0.02], Blue}}, Joined -> False, 
    PlotTheme -> "Grid"},
  Plot -> {PlotStyle -> Orange}]⟹
 QRMonTakeValue
 
Plotting-outliers-output-2

Estimating conditional distributions

Consider the following problem:

How to estimate the conditional density of the dependent variable given a value of the conditioning independent variable?

(In other words, find the distribution of the y-values for a given, fixed x-value.)

The solution of this problem using Quantile Regression is discussed in detail in [PG1] and [AA4].

Finding a solution for this problem can be seen as a primary motivation to develop Quantile Regression algorithms.

The following pipeline (i) computes and plots a set of five regression quantiles and (ii) then, using the found regression quantiles, computes and plots the conditional distributions for two focus points (-2 and 1).

QRMonUnit[distData]⟹
  QRMonQuantileRegression[6, 
   Range[0.1, 0.9, 0.2]]⟹
  QRMonPlot[GridLines -> {{-2, 1}, None}]⟹
  QRMonConditionalCDF[{-2, 1}]⟹
  QRMonConditionalCDFPlot;
Estimating-conditional-distributions-output-1
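The conditional CDFs can also be taken as a pipeline value. (A sketch; the result is assumed to be an association with the focus points as keys, in line with the plotting pipeline above.)

qCDFs =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[6, Range[0.1, 0.9, 0.2]]⟹
   QRMonConditionalCDF[{-2, 1}]⟹
   QRMonTakeValue;

Keys[qCDFs]

(* expected: {-2, 1} *)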

Moving average, moving median, and moving map

Fairly often it is a good idea for a given time series to apply filter functions like Moving Average or Moving Median. We might want to:

  • visualize the obtained transformed data,

  • do regression over the transformed data,

  • compare with regression curves over the original data.

For these reasons QRMon has the functions QRMonMovingAverage, QRMonMovingMedian, and QRMonMovingMap that correspond to the built-in functions MovingAverage, MovingMedian, and MovingMap.

Here is an example:

QRMonUnit[tsData]⟹
  QRMonDateListPlot[ImageSize -> Small]⟹
  QRMonMovingAverage[20]⟹
  QRMonEchoFunctionValue["Moving avg: ", DateListPlot[#, ImageSize -> Small] &]⟹
  QRMonMovingMap[Mean, Quantity[20, "Days"]]⟹
  QRMonEchoFunctionValue["Moving map: ", DateListPlot[#, ImageSize -> Small] &];
Moving-average-moving-median-and-moving-map-output-1

Dependent variable simulation

Consider the problem of making a time series that is a simulation of a process given with a known time series.

More formally,

  • we are given a time-axis grid (regular or irregular),

  • we consider each grid node to correspond to a random variable,

  • we want to generate time series based on the empirical CDF’s of the random variables that correspond to the grid nodes.

The formulation of the problem hints at an (almost) straightforward implementation using Quantile Regression.

p = QRMonUnit[tsData]⟹QRMonQuantileRegression[30, Join[{0.01}, Range[0.1, 0.9, 0.1], {0.99}]];

tsNew =
  p⟹
   QRMonSimulate[1000]⟹
   QRMonTakeValue;

opts = {ImageSize -> Medium, PlotTheme -> "Detailed"};
GraphicsGrid[{{DateListPlot[tsData, PlotLabel -> "Actual", opts],
    DateListPlot[tsNew, PlotLabel -> "Simulated", opts]}}]
Dependent-variable-simulation-output-1

Finding local extrema in noisy data

Using regression fitting — and Quantile Regression in particular — we can easily construct semi-symbolic algorithms for finding local extrema in noisy time series data; see [AA5]. The QRMon function with such an algorithm is QRMonLocalExtrema.

In brief, the algorithm steps are as follows. (For more details see [AA5].)

  1. Fit a polynomial through the data.

  2. Find the local extrema of the fitted polynomial. (We will call them fit estimated extrema.)

  3. Around each of the fit estimated extrema find the most extreme point in the data by a nearest neighbors search (by using Nearest).

The function QRMonLocalExtrema uses the regression quantiles previously found in the monad pipeline (and stored in the context.) The bottom regression quantile is used for finding local minima, the top regression quantile is used for finding the local maxima.

An example of finding local extrema follows.

QRMonUnit[TimeSeriesWindow[tsData, {{2015, 1, 1}, {2018, 12, 31}}]]⟹
  QRMonQuantileRegression[10, {0.05, 0.95}]⟹
  QRMonDateListPlot[Joined -> False, PlotTheme -> "Scientific"]⟹
  QRMonLocalExtrema["NumberOfProximityPoints" -> 100]⟹
  QRMonEchoValue⟹
  QRMonAddToContext⟹
  QRMonEchoFunctionContext[
   DateListPlot[{#localMinima, #localMaxima, #data}, 
     PlotStyle -> {PointSize[0.015], PointSize[0.015], Gray}, 
     Joined -> False, 
     PlotLegends -> {"localMinima", "localMaxima", "data"}, 
     PlotTheme -> "Scientific"] &];
Finding-local-extrema-in-noisy-data-output-1

Note that in the pipeline above in order to plot the data and local extrema together some additional steps are needed. The result of QRMonLocalExtrema becomes the pipeline value; that pipeline value is displayed with QRMonEchoValue, and stored in the context with QRMonAddToContext. If the pipeline value is an association — which is the case here — the monad function QRMonAddToContext joins that association with the context association. In this case this means that we will have key-value elements in the context for "localMinima" and "localMaxima". The date list plot at the end of the pipeline uses values of those context keys (together with the value for "data".)

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding "setter", "dropper", and "taker" functions as summarized in a previous section.

For example:

p = QRMonUnit[distData]⟹QRMonQuantileRegressionFit[2];

p⟹QRMonTakeRegressionFunctions

(* <|0.25 -> (0.0191185 + 0.00669159 #1 + 3.05509*10^-14 #1^2 &), 
     0.5 -> (0.191408 + 9.4728*10^-14 #1 + 3.02272*10^-14 #1^2 &), 
     0.75 -> (0.563422 + 3.8079*10^-11 #1 + 7.63637*10^-14 #1^2 &)|> *)
     

If other values are put in the context they can be obtained through the (generic) function QRMonTakeContext, [AAp1]:

p = QRMonUnit[RandomReal[1, {2, 2}]]⟹QRMonAddToContext["data"];

(p⟹QRMonTakeContext)["data"]

(* {{0.608789, 0.741599}, {0.877074, 0.861554}} *)

Another generic function from [AAp1] is QRMonTakeValue (used many times above.)

Here is an example of the "data dropper" QRMonDropData:

p⟹QRMonDropData⟹QRMonTakeContext

(* <||> *)

(The "droppers" simply use the state monad function QRMonDropFromContext, [AAp1]. For example, QRMonDropData is equivalent to QRMonDropFromContext["data"].)

Unit tests

The development of QRMon was done with two types of unit tests: (i) directly specified tests, [AAp7], and (ii) tests based on randomly generated pipelines, [AAp8].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicQuantileRegression-Unit-Tests.wlt", [AAp7]:

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt"]
]
Unit-tests-output-1

The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "GenerateData", "QuantileRegression-1", \
"QuantileRegression-2", "QuantileRegression-3", \
"QuantileRegression-and-Fit-1", "Fit-and-QuantileRegression-1", \
"QuantileRegressionFit-and-Fit-1", "Fit-and-QuantileRegressionFit-1", \
"Outliers-1", "Outliers-2", "GridSequence-1", "BandsSequence-1", \
"ConditionalCDF-1", "Evaluate-1", "Evaluate-2", "Evaluate-3", \
"Simulate-1", "Simulate-2", "Simulate-3"} *)

Random pipelines tests

Since the monad QRMon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the QRMon DSL the sentences are QRMon pipelines. The package "MonadicQuantileRegressionRandomPipelinesUnitTests.m", [AAp8], has functions for generation of QRMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeQRMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

(* 
Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]}, 
 Clear[DoubleLongRightArrow];
 pipelines = pipelines /. {_TemporalData -> "tsData", _?MatrixQ -> "distData"};
 GridTableForm[Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], TableHeadings -> {"pipeline"}]
 ]
AutoCollapse[] *)
Unit-tests-random-pipelines-sample

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunQRMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages; all of the rest passed.

rpTRObj = TestReport[res]

(The message failures, of course, have to be examined — some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Workflow operations

A list of possible, additional workflow operations and improvements follows.

  • Certain improvements can be done over the specification of the different plot options.

  • It will be useful to develop a function for automatic finding of over-fitting parameters.

  • The time series simulation should be done by aggregation of similar time intervals.

    • For example, for a time series that spans several years, a Quantile Regression simulation is made for each calendar month, and the results are spliced to obtain a one year simulation.
  • If the time series is represented as a sequence of categorical values, then the time series simulation can use Bayesian probabilities derived from sub-sequences.
    • QRMon already has functions that facilitate that: QRMonGridSequence and QRMonBandsSequence. (See the sketch after this list.)
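Here is a sketch of invoking those functions. (Their exact signatures are not documented in this article; the calls below assume that, analogously to QRMonOutliers, they use the regression quantiles already stored in the context.)

QRMonUnit[tsData]⟹
  QRMonQuantileRegression[20, Range[0.1, 0.9, 0.2]]⟹
  QRMonGridSequence⟹
  QRMonEchoValue;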

Conversational agent

Using the packages [AAp10, AAp11] we can generate QRMon pipelines with natural commands. The plan is to develop and document those functionalities further.

Here is an example of a pipeline constructed with natural language commands:

QRMonUnit[distData]⟹
  ToQRMonPipelineFunction["show data summary"]⟹
  ToQRMonPipelineFunction["calculate quantile regression for quantiles 0.2, 0.8 and with 40 knots"]⟹
  ToQRMonPipelineFunction["plot"];
Future-plans-conversational-agent-output-1

Implementation notes

The implementation methodology of the QRMon monad packages [AAp3, AAp8] followed the methodology created for the ClCon monad package [AAp9, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document "A monad for classification workflows", [AA6].

A lot of the functionalities and signatures of QRMon were designed and programmed through considerations of natural language command specifications given to a specialized conversational agent. (As discussed in the previous section.)

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicQuantileRegression.m.

[AAp4] Anton Antonov, Quantile regression Mathematica package, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/QuantileRegression.m .

[AAp5] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp7] Anton Antonov, Monadic Quantile Regression unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt .

[AAp8] Anton Antonov, Monadic Quantile Regression random pipelines Mathematica unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegressionRandomPipelinesUnitTests.m .

[AAp9] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

ConversationalAgents packages

[AAp10] Anton Antonov, Time series workflows grammar in EBNF, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

[AAp11] Anton Antonov, QRMon translator Mathematica package,(2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

MathematicaForPrediction articles

[AA1] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, "Quantile regression through linear programming", (2013), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2013/12/16/quantile-regression-through-linear-programming/ .

[AA3] Anton Antonov, "Quantile regression with B-splines", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/01/quantile-regression-with-b-splines/ .

[AA4] Anton Antonov, "Estimation of conditional density distributions", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/13/estimation-of-conditional-density-distributions/ .

[AA5] Anton Antonov, "Finding local extrema in noisy data using Quantile Regression", (2015), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2015/09/27/finding-local-extrema-in-noisy-data-using-quantile-regression/ .

[AA6] Anton Antonov, "A monad for classification workflows", (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows/ .

Other

[Wk1] Wikipedia entry, Monad, URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry, Quantile Regression, URL: https://en.wikipedia.org/wiki/Quantile_regression .

[Wk3] Wikipedia entry, Chebyshev polynomials, URL: https://en.wikipedia.org/wiki/Chebyshev_polynomials .

[CN1] Brian S. Cade and Barry R. Noon, "A gentle introduction to quantile regression for ecologists", (2003), Frontiers in Ecology and the Environment, 1 (8): 412-420. doi:10.2307/3868138. URL: http://www.econ.uiuc.edu/~roger/research/rq/QReco.pdf .

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub. URL: https://github.com/halirutan/Mathematica-IntelliJ-Plugin .

[RG1] Roger Koenker, Quantile Regression, Cambridge University Press, 2005.


A monad for classification workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for classification workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

The goal of the monad design is to make the specification of classification workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario.

The monad is named ClCon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1], the classifier ensembles package "ClassifierEnsembles.m", [AAp4, AA2], and the package for Receiver Operating Characteristic (ROC) functions calculation and plotting "ROCFunctions.m", [AAp5, AA2, Wk2].

The data for this document is read from WL’s repository using the package "GetMachineLearningDataset.m", [AAp10].

The monadic programming design is used as a Software Design Pattern. The ClCon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of machine learning classification workflows.

Here is an example of using the ClCon monad over the Titanic data:

"ClCon-simple-dsTitanic-pipeline"

"ClCon-simple-dsTitanic-pipeline"

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As mentioned above, the monad ClCon can be seen as a DSL. Because of this the monad pipelines made with ClCon are sometimes called "specifications".

Contents description

The document has the following structure.

  • The sections "Package load" and "Data load" obtain the needed code and data.
    (Needed and put upfront from the "Reproducible research" point of view.)

  • The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

  • The sections "ClCon overview" and "Monad elements" provide technical description of the ClCon monad needed to utilize it.
    (Using a fair amount of examples.)

  • The section "Example use cases" gives several more elaborated examples of ClCon that have "real life" flavor.
    (But still didactic and concise enough.)

  • The section "Unit test" describes the tests used in the development of the ClCon monad.
    (The random pipelines unit tests are especially interesting.)

  • The section "Future plans" outlines future directions of development.
    (The most interesting and important one is the "conversational agent" direction.)

  • The section "Implementation notes" has (i) a diagram outlining the ClCon development process, and (ii) a list of observations and morals.
    (Some fairly obvious, but deemed fairly significant and hence stated explicitly.)

Remark: One can read only the sections "Introduction", "Design considerations", "Monad design", and "ClCon overview". That set of sections provides a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp10, AAp12]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m"]

(*
Importing from GitHub: MathematicaForPredictionUtilities.m
Importing from GitHub: MosaicPlot.m
Importing from GitHub: CrossTabulate.m
Importing from GitHub: StateMonadCodeGenerator.m
Importing from GitHub: ClassifierEnsembles.m
Importing from GitHub: ROCFunctions.m
Importing from GitHub: VariableImportanceByClassifiers.m
Importing from GitHub: SSparseMatrix.m
Importing from GitHub: OutlierIdentifiers.m
*)

Data load

In this section we load data that is used in the rest of the document. The "quick" data is created in order to specify quick, illustrative computations.

Remark: In all datasets the classification labels are in the last column.

The summarization of the data is done through ClCon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp7].

WL resources data

The following commands produce datasets using the package [AAp10] (that utilizes ExampleData):

dsTitanic = GetMachineLearningDataset["Titanic"];
dsMushroom = GetMachineLearningDataset["Mushroom"];
dsWineQuality = GetMachineLearningDataset["WineQuality"];

Here are the dimensions of the datasets:

Dataset[Dataset[Map[Prepend[Dimensions[ToExpression[#]], #] &, {"dsTitanic", "dsMushroom", "dsWineQuality"}]][All, AssociationThread[{"name", "rows", "columns"}, #] &]]
"ClCon-datasets-dimensions"

"ClCon-datasets-dimensions"

Here is the summary of dsTitanic:

ClConUnit[dsTitanic]⟹ClConSummarizeData["MaxTallies" -> 12];
"ClCon-dsTitanic-summary"

"ClCon-dsTitanic-summary"

Here is the summary of dsMushroom in long form:

ClConUnit[dsMushroom]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
"ClCon-dsMushroom-summary"

"ClCon-dsMushroom-summary"

Here is the summary of dsWineQuality in long form:

ClConUnit[dsWineQuality]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
"ClCon-dsWineQuality-summary"

"ClCon-dsWineQuality-summary"

"Quick" data

In this subsection we make up some data that is used for illustrative purposes.

SeedRandom[212]
dsData = RandomInteger[{0, 1000}, {100}];
dsData = Dataset[
   Transpose[{dsData, Mod[dsData, 3], Last@*IntegerDigits /@ dsData, ToString[Mod[#, 3]] & /@ dsData}]];
dsData = Dataset[dsData[All, AssociationThread[{"number", "feature1", "feature2", "label"}, #] &]];
Dimensions[dsData]

(* {100, 4} *)

Here is a sample of the data:

RandomSample[dsData, 6]
"ClCon-quick-data-sample"

"ClCon-quick-data-sample"

Here is a summary of the data:

ClConUnit[dsData]⟹ClConSummarizeData;
"ClCon-quick-data-summary-ds"

"ClCon-quick-data-summary-ds"

Here we convert the data into a list of record-label rules (and show the summary):

mlrData = ClConToNormalClassifierData[dsData];
ClConUnit[mlrData]⟹ClConSummarizeData;
"ClCon-quick-data-summary-mlr"

"ClCon-quick-data-summary-mlr"

Finally, we make the array version of the dataset:

arrData = Normal[dsData[All, Values]];

Design considerations

The steps of the main classification workflow addressed in this document follow.

  1. Retrieving data from a data repository.

  2. Optionally, transform the data.

  3. Split data into training and test parts.

    • Optionally, split training data into training and validation parts.
  4. Make a classifier with the training data.

  5. Test the classifier over the test data.

    • Computation of different measures including ROC.

The following diagram shows the steps.

"Classification-workflow-horizontal-layout"

Very often the workflow above is too simple in real situations. Often when making "real world" classifiers we have to experiment with different transformations, different classifier algorithms, and parameters for both transformations and classifiers. Examine the following mind-map that outlines the activities in making competition classifiers.

"Making-competitions-classifiers-mind-map.png"

In view of the mind-map above we can come up with the following flow-chart that is an elaboration on the main, simple workflow flow-chart.

"Classification-workflow-extended.jpg"

In order to address:

  • the introduction of new elements in classification workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for classification workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a classification workflow monad and hints on the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […]

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

Monad design

The monad we consider is designed to speed up the programming of the classification workflows outlined in the previous section. The monad is named ClCon for "Classification with Context".

We want to be able to construct monad pipelines of the general form:

"ClCon-generic-monad-formula"

"ClCon-generic-monad-formula"

ClCon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

"ClCon-State-monad-formula"

"ClCon-State-monad-formula"

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of ClCon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline.

  2. The object is hard to compute.

Such objects are training data, ROC data, and classifiers.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial classification workflows.

  • The monad works with different data types: Dataset, lists of machine learning rules, full arrays.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • The monad works with single classifier objects and with classifier ensembles.

    • This means support of different classifier measures and ROC plots for both single classifiers and classifier ensembles.
  • The monad allows cursory examination and summarization of the data.
    • For insight and in order to verify assumptions.
  • The monad has operations to compute importance of variables.

  • We can easily obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • We can calculate classification measures using a specified ROC parameter and a class label.

  • We can easily plot different combinations of ROC functions.

The ClCon components and their interaction are given in the following diagram. (The components correspond to the main workflow given in the previous section.)

"ClCon-components-interaction.jpg"

In the diagram above the operations are given in rectangles. Data objects are given in round corner rectangles and classifier objects are given in round corner squares.

The main ClCon operations implicitly put in the context or utilize from the context the following objects:

  • training data,

  • test data,

  • validation data,

  • classifier (a classifier function or an association of classifier functions),

  • ROC data,

  • variable names list.

Note that the set of types of ClCon pipeline values is fairly heterogeneous and certain awareness of "the current pipeline value" is assumed when composing ClCon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

ClCon overview

When using a monad we lift certain data into the "monad space", using monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the ClCon monad is done with the function ClConUnit. Results from the monad can be obtained with the functions ClConTakeValue, ClConTakeContext, or with the other ClCon functions with the prefix "ClConTake" (see below.)

Here is a corresponding diagram of a generic computation with the ClCon monad:

"ClCon-pipeline"

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete ClCon pipeline that corresponds to the diagram above. In the following table each pipeline operation is shown together with a short explanation and the context keys after its execution.

"ClCon-pipeline-TraceMonad-table"

"ClCon-pipeline-TraceMonad-table"

Here is the output of the pipeline:

"ClCon-pipeline-TraceMonad-Echo-output"

"ClCon-pipeline-TraceMonad-Echo-output"

In the specified pipeline computation the last column of the dataset is assumed to be the one with the class labels.

The ClCon functions are separated into four groups:

  • operations,

  • setters,

  • takers,

  • State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the ClCon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the ClCon monad functions with the pipeline value and context.

"ClCon-table-of-operations-setters-takers"

"ClCon-table-of-operations-setters-takers"

Several functions that use ROC data have two rows in the table because they calculate the needed ROC data if it is not available in the monad context.

State monad functions

Here are the ClCon State Monad functions (generated using the prefix "ClCon", [AAp1, AA1]):

"ClCon-StateMonad-functions-table"

"ClCon-StateMonad-functions-table"

Monad elements

In this section we show that ClCon has all of the properties listed in the previous section.

The monad head

The monad head is ClCon. Anything wrapped in ClCon can serve as the monad's pipeline value. It is better, though, to use the constructor ClConUnit. (Which adheres to the definition in [Wk1].)

ClCon[{{1, "a"}, {2, "b"}}, <||>]⟹ClConSummarizeData;
"ClCon-monad-head-example"

"ClCon-monad-head-example"

Lifting data to the monad

The function lifting the data into the monad ClCon is ClConUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

ClConUnit[dsData]⟹ClConSummarizeData;
"ClCon-lifting-data-example-1"

"ClCon-lifting-data-example-1"

ClConUnit[]⟹ClConSetTrainingData[dsData]⟹ClConSummarizeData;
"ClCon-lifting-data-example-2"

"ClCon-lifting-data-example-2"

(See the sub-section "Setters and takers" for more details of setting and taking values in ClCon contexts.)

Currently the monad can deal with data in the following forms:

  • datasets,

  • matrices,

  • lists of example->label rules.

The ClCon monad also has the non-monadic function ClConToNormalClassifierData which can be used to convert datasets and matrices to lists of example->label rules. Here is an example:

Short[ClConToNormalClassifierData[dsData], 3]

(*
 {{639, 0, 9} -> "0", {121, 1, 1} -> "1", {309, 0, 9} ->  "0", {648, 0, 8} -> "0", {995, 2, 5} -> "2", {127, 1, 7} -> "1", {908, 2, 8} -> "2", {564, 0, 4} -> "0", {380, 2, 0} -> "2", {860, 2, 0} -> "2",
 <<80>>,
 {464, 2, 4} -> "2", {449, 2, 9} -> "2", {522, 0, 2} -> "0", {288, 0, 8} -> "0", {51, 0, 1} -> "0", {108, 0, 8} -> "0", {76, 1, 6} -> "1", {706, 1, 6} -> "1", {765, 0, 5} -> "0", {195, 0, 5} -> "0"}
*)

When the data lifted to the monad is a dataset or a matrix it is assumed that the last column has the class labels. WL makes it easy to rearrange columns in such a way that any column of a dataset or a matrix becomes the last one.
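For example, for a matrix that rearrangement is a simple part specification. (A sketch with made-up data; here the second column is moved to the last position.)

mat = RandomInteger[9, {3, 4}];
mat[[All, {1, 3, 4, 2}]]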

Data splitting

The splitting is made with ClConSplitData, which takes up to two arguments and options. The first argument specifies the fraction of training data. The second argument — if given — specifies the fraction of the validation part of the training data. If the value of option Method is "LabelsProportional", then the splitting is done in correspondence of the class labels tallies. ("LabelsProportional" is the default value.) Data splitting demonstration examples follow.

Here are the dimensions of the dataset dsData:

Dimensions[dsData]

(* {100, 4} *)

Here we split the data into 70% for training and 30% for testing and then we verify that the corresponding number of rows add to the number of rows of dsData:

val = ClConUnit[dsData]⟹ClConSplitData[0.7]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(* 
 <|"trainingData" -> {69, 4}, "testData" -> {31, 4}|>
 100 
*)

Note that if Method is not "LabelsProportional" we get slightly different results.

val = ClConUnit[dsData]⟹ClConSplitData[0.7, Method -> "Random"]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(*
  <|"trainingData" -> {70, 4}, "testData" -> {30, 4}|>
 100 
*)

In the following code we split the data into 70% for training and 30% for testing, then the training data is further split into 90% for training and 10% for classifier training validation; then we verify that the number of rows add up.

val = ClConUnit[dsData]⟹ClConSplitData[0.7, 0.1]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(*
 <|"trainingData" -> {61, 4}, "testData" -> {31, 4}, "validationData" -> {8, 4}|>
 100
*)

Classifier training

The monad ClCon supports both single classifiers obtained with Classify and classifier ensembles obtained with Classify and managed with the package "ClassifierEnsembles.m", [AAp4].

Single classifier training

With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a Logistic Regression classifier, and finally take that classifier from the monad.

cf =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier["LogisticRegression"]⟹
   ClConTakeClassifier;

Here is information about the obtained classifier:

ClassifierInformation[cf, "TrainingTime"]

(* Quantity[3.84008, "Seconds"] *)

If we want to pass parameters to the classifier training we can use the Method option. Here we train a Random Forest classifier with 400 trees:

cf =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}]⟹
   ClConTakeClassifier;

ClassifierInformation[cf, "TreeNumber"]

(* 400 *)

Classifier ensemble training

With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a classifier ensemble of three Logistic Regression classifiers and two Nearest Neighbors classifiers using random sampling of 90% of the training data, and finally take that classifier ensemble from the monad.

ensemble =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[{{"LogisticRegression", 0.9, 3}, {"NearestNeighbors", 0.9, 2}}]⟹
   ClConTakeClassifier;

The classifier ensemble is simply an association with keys that are automatically assigned names and corresponding values that are classifiers.

ensemble
"ClCon-ensemble-classifier-example-1"

"ClCon-ensemble-classifier-example-1"

Here are the training times of the classifiers in the obtained ensemble:

ClassifierInformation[#, "TrainingTime"] & /@ ensemble

(*
 <|"LogisticRegression[1,0.9]" -> Quantity[3.47836, "Seconds"], 
   "LogisticRegression[2,0.9]" -> Quantity[3.47681, "Seconds"], 
   "LogisticRegression[3,0.9]" -> Quantity[3.4808, "Seconds"], 
   "NearestNeighbors[1,0.9]" -> Quantity[1.82454, "Seconds"], 
   "NearestNeighbors[2,0.9]" -> Quantity[1.83804, "Seconds"]|>
*)

A more precise specification can be given using associations. The specification

<|"method" -> "LogisticRegression", "sampleFraction" -> 0.9, "numberOfClassifiers" -> 3, "samplingFunction" -> RandomChoice|>

says "make three Logistic Regression classifiers, for each taking 90% of the training data using the function RandomChoice."

Here is a pipeline specification equivalent to the pipeline specification above:

ensemble2 =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[{
       <|"method" -> "LogisticRegression", 
         "sampleFraction" -> 0.9, 
         "numberOfClassifiers" -> 3, 
         "samplingFunction" -> RandomSample|>, 
       <|"method" -> "NearestNeighbors", 
         "sampleFraction" -> 0.9, 
         "numberOfClassifiers" -> 2, 
         "samplingFunction" -> RandomSample|>}]⟹
   ClConTakeClassifier;

ensemble2
"ClCon-ensemble-classifier-example-2"

"ClCon-ensemble-classifier-example-2"

Classifier testing

Classifier testing is done with the testing data in the context.

Here is a pipeline that takes the Titanic data, splits it, and trains a classifier:

p =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier["DecisionTree"];

Here is how we compute selected classifier measures:

p⟹
 ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
 ClConTakeValue

(*
 <|"Accuracy" -> 0.792683, 
   "Precision" -> <|"died" -> 0.802691, "survived" -> 0.771429|>, 
   "Recall" -> <|"died" -> 0.881773, "survived" -> 0.648|>, 
   "FalsePositiveRate" -> <|"died" -> 0.352, "survived" -> 0.118227|>|>
*)

(The measures are listed in the function page of ClassifierMeasurements.)

Here we show the confusion matrix plot:

p⟹ClConClassifierMeasurements["ConfusionMatrixPlot"]⟹ClConEchoValue;
"ClCon-classifier-testing-ConfusionMatrixPlot-echo"

"ClCon-classifier-testing-ConfusionMatrixPlot-echo"

Here is how we plot ROC curves by specifying the ROC parameter range and the image size:

p⟹ClConROCPlot["FPR", "TPR", "ROCRange" -> Range[0, 1, 0.1], ImageSize -> 200];
"ClCon-classifier-testing-ROCPlot-echo"

"ClCon-classifier-testing-ROCPlot-echo"

Remark: ClCon uses the package ROCFunctions.m, [AAp5], which implements all functions defined in [Wk2].

Here we plot ROC functions values (y-axis) over the ROC parameter (x-axis):

p⟹ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}];
"ClCon-classifier-testing-ROCListLinePlot-echo"

Note of the "ClConROC*Plot" functions automatically echo the plots. The plots are also made to be the pipeline value. Using the option specification "Echo"->False the automatic echoing of plots can be suppressed. With the option "ClassLabels" we can focus on specific class labels.

p⟹
  ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}, "Echo" -> False, "ClassLabels" -> "survived", ImageSize -> Medium]⟹
  ClConEchoValue;
"ClCon-classifier-testing-ROCListLinePlot-survived-echo"

"ClCon-classifier-testing-ROCListLinePlot-survived-echo"

Variable importance finding

Using the pipeline constructed above let us find the most decisive variables using systematic random shuffling (as explained in [AA3]):

p⟹
 ClConAccuracyByVariableShuffling⟹
 ClConTakeValue

(*
 <|None -> 0.792683, "id" -> 0.664634, "passengerClass" -> 0.75, "passengerAge" -> 0.777439, "passengerSex" -> 0.612805|>
*)

We deduce that "passengerSex" is the most decisive variable because its corresponding classification success rate is the smallest. (See [AA3] for more details.)
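
For clarity, here is a minimal sketch of the shuffling idea itself. (This is not ClCon's internal code; the argument names cf, testFeatures, and testLabels are hypothetical.)

shuffledAccuracy[cf_ClassifierFunction, testFeatures_, testLabels_, col_Integer] :=
  Block[{d = testFeatures},
   (* destroy the information in the specified column by random shuffling *)
   d[[All, col]] = RandomSample[d[[All, col]]];
   (* fraction of correctly classified records *)
   Mean @ Boole @ MapThread[Equal, {cf /@ d, testLabels}]
  ];

The more the accuracy drops after shuffling a variable, the more decisive that variable is.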

Using the option "ClassLabels" we can focus on specific class labels:

p⟹ClConAccuracyByVariableShuffling["ClassLabels" -> "survived"]⟹ClConTakeValue

(*
 <|None -> {0.771429}, "id" -> {0.595506}, "passengerClass" -> {0.731959}, "passengerAge" -> {0.71028}, "passengerSex" -> {0.414414}|>
*)

Setters and takers

The values from the monad context can be set or obtained with the corresponding "setters" and "takers" functions as summarized in the previous section.

For example:

p⟹ClConTakeClassifier

(* ClassifierFunction[__] *) 

Short[Normal[p⟹ClConTakeTrainingData]]

(*
  {<|"id" -> 858, "passengerClass" -> "3rd", "passengerAge" -> 30, "passengerSex" -> "male", "passengerSurvival" -> "survived"|>, <<979>> }
*)

Short[Normal[p⟹ClConTakeTestData]]

(* {<|"id" -> 285, "passengerClass" -> "1st", "passengerAge" -> 60, "passengerSex" -> "female", "passengerSurvival" -> "survived"|> , <<327>> } 
*)

p⟹ClConTakeVariableNames

(* {"id", "passengerClass", "passengerAge", "passengerSex", "passengerSurvival"} *)

If other values are put in the context they can be obtained through the (generic) function ClConTakeContext, [AAp1]:

p = ClConUnit[RandomReal[1, {2, 2}]]⟹ClConAddToContext["data"];

(p⟹ClConTakeContext)["data"]

(* {{0.815836, 0.191562}, {0.396868, 0.284587}} *)

Another generic function from [AAp1] is ClConTakeValue (used many times above.)

Example use cases

Classification with MNIST data

Here we show an example of using ClCon with the reasonably large dataset of images MNIST, [YL1].

mnistData = ExampleData[{"MachineLearning", "MNIST"}, "Data"];

SeedRandom[3423]
p =
  ClConUnit[RandomSample[mnistData, 20000]]⟹
   ClConSplitData[0.7]⟹
   ClConSummarizeData⟹
   ClConMakeClassifier["NearestNeighbors"]⟹
   ClConClassifierMeasurements[{"Accuracy", "ConfusionMatrixPlot"}]⟹
   ClConEchoValue;
"ClCon-MNIST-example-output"

"ClCon-MNIST-example-output"

Here we plot the ROC curve for a specified digit:

p⟹ClConROCPlot["ClassLabels" -> 5];

Conditional continuation

In this sub-section we show how the computations in a ClCon pipeline can be stopped or continued based on a certain condition.

The pipeline below makes a simple classifier ("LogisticRegression") for the WineQuality data, and if the recall for the important label ("high") is not large enough makes a more complicated classifier ("RandomForest"). The pipeline marks intermediate steps by echoing outcomes and messages.

SeedRandom[267]
res =
  ClConUnit[dsWineQuality[All, Join[#, <|"wineQuality" -> If[#wineQuality >= 7, "high", "low"]|>] &]]⟹
   ClConSplitData[0.75, 0.2]⟹
   ClConSummarizeData(* summarize the data *)⟹
   ClConMakeClassifier[Method -> "LogisticRegression"](* training a simple classifier *)⟹
   ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
   ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
   ClConEchoValue⟹
   ClConIfElse[#["Recall", "high"] > 0.70 & (* criteria based on the recall for "high" *),
    ClConEcho["Good recall for \"high\"!", "Success:"],
    ClConUnit[##]⟹
      ClConEcho[Style["Recall for \"high\" not good enough... making a large random forest.", Darker[Red]], "Info:"]⟹
      ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}](* training a complicated classifier *)⟹
      ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
      ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
      ClConEchoValue &];
"ClCon-conditional-continuation-example-output"

"ClCon-conditional-continuation-example-output"

We can see that the recall with the more complicated classifier is higher. Also, the ROC plots of the second classifier are visibly closer to the ideal one. Still, the recall is not good enough; we have to find a threshold that is better than the default one. (See the next sub-section.)

Classification with custom thresholds

(In this sub-section we use the monad from the previous sub-section.)

Here we compute classification measures using the threshold 0.3 for the important class label ("high"):

res⟹
 ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}, "high" -> 0.3]⟹
 ClConTakeValue

(* <|"Accuracy" -> 0.782857,  "Precision" -> <|"high" -> 0.498871, "low" -> 0.943734|>, 
     "Recall" -> <|"high" -> 0.833962, "low" -> 0.76875|>, 
     "FalsePositiveRate" -> <|"high" -> 0.23125, "low" -> 0.166038|>|> *)

We can see that the recall for "high" is fairly large and the rest of the measures have satisfactory values. (The accuracy did not drop that much, and the false positive rate is not that large.)

Here we compute suggestions for the best thresholds:

res (* start with a previous monad *)⟹
  ClConROCPlot[ImageSize -> 300] (* make ROC plots *)⟹
  ClConSuggestROCThresholds[3] (* find the best 3 thresholds per class label *)⟹
  ClConEchoValue (* echo the result *);
"ClCon-best-thresholds-example-output"

"ClCon-best-thresholds-example-output"

The suggestions are the ROC points that are closest to the point {0, 1} (which corresponds to the ideal classifier.)
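
Here is a minimal sketch of that selection criterion over hypothetical ROC points (each point being a {FPR, TPR} pair):

rocPoints = {{0.06, 0.7}, {0.1, 0.9}, {0.3, 0.95}};
(* the two points with the smallest Euclidean distance to the ideal point {0, 1} *)
TakeSmallestBy[rocPoints, EuclideanDistance[#, {0, 1}] &, 2]

(* {{0.1, 0.9}, {0.3, 0.95}} *)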

Here is a way to use threshold suggestions within the monad pipeline:

res⟹
  ClConSuggestROCThresholds⟹
  ClConEchoValue⟹
  (ClConUnit[##]⟹
    ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall"}, "high" -> First[#1["high"]]] &)⟹
  ClConEchoValue;

(*
value: <|high->{0.35},low->{0.65}|>
value: <|Accuracy->0.825306,Precision-><|high->0.571831,low->0.928736|>,Recall-><|high->0.766038,low->0.841667|>|> 
*)

Unit tests

The development of ClCon was done with two types of unit tests: (1) directly specified tests, [AAp11], and (2) tests based on randomly generated pipelines, [AAp12].

Both unit test packages should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicContextualClassification-Unit-Tests.wlt", [AAp11]:

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicContextualClassification-Unit-Tests.wlt"]
]
"ClCon-direct-unit-tests-TestReport-icon"

"ClCon-direct-unit-tests-TestReport-icon"

The natural language derived test IDs should give a fairly good idea of the functionalities covered in [AAp11].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "EvenOddDataset", "EvenOddDataMLRules", \
"DataToContext-no-[]", "DataToContext-with-[]", \
"ClassifierMaking-with-Dataset-1", "ClassifierMaking-with-MLRules-1", \
"AccuracyByVariableShuffling-1", "ROCData-1", \
"ClassifierEnsemble-different-methods-1", \
"ClassifierEnsemble-different-methods-2-cont", \
"ClassifierEnsemble-different-methods-3-cont", \
"ClassifierEnsemble-one-method-1", "ClassifierEnsemble-one-method-2", \
"ClassifierEnsemble-one-method-3-cont", \
"ClassifierEnsemble-one-method-4-cont", "AssignVariableNames-1", \
"AssignVariableNames-2", "AssignVariableNames-3", "SplitData-1", \
"Set-and-take-training-data", "Set-and-take-test-data", \
"Set-and-take-validation-data", "Partial-data-summaries-1", \
"Assign-variable-names-1", "Split-data-100-pct", \
"MakeClassifier-with-empty-unit-1", \
"No-rocData-after-second-MakeClassifier-1"} *)

Random pipelines tests

Since the monad ClCon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the ClCon DSL the sentences are ClCon pipelines. The package "MonadicContextualClassificationRandomPipelinesUnitTests.m", [AAp12], has functions for generation of ClCon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeClConRandomPipelines[300];
Length[pipelines]

(* 300 *)

Here is a sample of the generated pipelines:

Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]}, 
 Clear[DoubleLongRightArrow];
 pipelines = pipelines /. {_Dataset -> "ds", _?DataRulesForClassifyQ -> "mlrData"};
 GridTableForm[
  Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], 
  TableHeadings -> {"pipeline"}]
]
AutoCollapse[]
"ClCon-random-pipelines-tests-sample-table"

"ClCon-random-pipelines-tests-sample-table"

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunClConPipelines[pipelines, "Echo" -> True];
]

(* {350.083, Null} *)

From the test report results we see that a dozen tests failed with messages, all of the rest passed.

rpTRObj = TestReport[res]
"ClCon-random-pipelines-TestReport-icon"

"ClCon-random-pipelines-TestReport-icon"

(The message failures, of course, have to be examined — some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Workflow operations

Outliers

Better incorporation of outlier finding and manipulation in ClCon. Currently only outlier finding is surfaced in [AAp3]. (The package internally has other related functions.)

ClConUnit[dsTitanic[Select[#passengerSex == "female" &]]]⟹
 ClConOutlierPosition⟹
 ClConTakeValue

(* {4, 17, 21, 22, 25, 29, 38, 39, 41, 59} *)

Dimension reduction

Support of dimension reduction application: quick construction of pipelines that allow applying different dimension reduction methods.

Currently ClCon applies dimension reduction only to data whose non-label parts can be easily converted into numerical matrices.

ClConUnit[dsWineQuality]⟹
  ClConSplitData[0.7]⟹
  ClConReduceDimension[2, "Echo" -> True]⟹
  ClConRetrieveFromContext["svdRes"]⟹
  ClConEchoFunctionValue["SVD dimensions:", Dimensions /@ # &]⟹
  ClConSummarizeData;
"ClCon-dimension-reduction-example-echo"

"ClCon-dimension-reduction-example-echo"

Conversational agent

Using the packages [AAp13, AAp15] we can generate ClCon pipelines with natural commands. The plan is to develop and document those functionalities further.

Implementation notes

The ClCon package, MonadicContextualClassification.m, [AAp3], is based on the packages [AAp1, AAp4-AAp9]. It was developed using Mathematica and the Mathematica plug-in for IntelliJ IDEA by Patrick Scheibe, [PS1]. The following diagram shows the development workflow.

"ClCon-development-cycle"

Some observations and morals follow.

  • Making the unit tests [AAp11] made the final implementation stage much more comfortable.
    • Of course, in retrospect that is obvious.
  • Initially "MonadicContextualClassification.m" was not real a package, just a collection of global context functions with the prefix "ClCon". This made some programming design decisions harder, slower, and more cumbersome. By making a proper package the development became much easier because of the "peace of mind" brought by the context feature encapsulation.
  • The making of random pipeline tests, [AAp12], helped catch a fair amount of inconvenient "features" and bugs.
    • (Both tests sets [AAp11, AAp12] can be made to be more comprehensive.)
  • The design of a conversational agent for producing ClCon pipelines with natural language commands brought a very fruitful viewpoint on the overall functionalities and the determination and limits of the ClCon development goals. See [AAp13, AAp14, AAp15].

  • "Eat your own dog food", or in this case: "use ClCon functionalities to implement ClCon functionalities."

    • Since we are developing a DSL it is natural to use that DSL for its own advancement.

    • Again, in retrospect that is obvious. Also probably should be seen as a consequence of practicing a certain code refactoring discipline.

    • The reason to list that moral is that often it is somewhat "easier" to implement functionalities thinking locally, ad-hoc, forgetting or not reviewing other, already implemented functions.

  • In order to come up with better designs and find inconsistencies: write many pipelines and discuss them with co-workers.

    • This is obvious. I would like to mention that a somewhat good alternative to discussions is (i) writing this document and related ones and (ii) making, running, and examining of the random pipelines tests.

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp4] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

[AAp5] Anton Antonov, Receiver operating characteristic functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m .

[AAp6] Anton Antonov, Variable importance determination by classifiers implementation in Mathematica,(2015), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/VariableImportanceByClassifiers.m .

[AAp7] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m .

[AAp9] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp10] Anton Antonov, Obtain and transform Mathematica machine learning data-sets, (2018), MathematicaVsR at GitHub.

[AAp11] Anton Antonov, Monadic contextual classification Mathematica unit tests, (2018), MathematicaVsR at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt .

[AAp12] Anton Antonov, Monadic contextual classification random pipelines Mathematica unit tests, (2018), MathematicaVsR at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m .

ConversationalAgents packages

[AAp13] Anton Antonov, Classifier workflows grammar in EBNF, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

[AAp14] Anton Antonov, Classifier workflows grammar Mathematica unit tests, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

[AAp15] Anton Antonov, ClCon translator Mathematica package, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

MathematicaForPrediction articles

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, "ROC for classifier ensembles, bootstrapping, damaging, and interpolation", (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

[AA3] Anton Antonov, "Importance of variables investigation guide", (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md .

Other

[Wk1] Wikipedia entry, Monad, URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry, Receiver operating characteristic, URL: https://en.wikipedia.org/wiki/Receiver_operating_characteristic .

[YL1] Yann LeCun et al., MNIST database site. URL: http://yann.lecun.com/exdb/mnist/ .

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub. URL: https://github.com/halirutan/Mathematica-IntelliJ-Plugin .

The Great conversation in USA presidential speeches

Introduction

This document shows a way to chart in Mathematica / WL the evolution of topics in collections of texts. The making of this document (and related code) is primarily motivated by the fascinating concept of the Great Conversation, [Wk1, MA1]. In brief, all western civilization books are based on 103 great ideas; if we find the great ideas each significant book is based on we can construct a time-line (spanning centuries) of the great conversation between the authors; see [MA1, MA2, MA3].

Instead of finding the great ideas in a text collection we extract topics statistically, using dimension reduction with Non-Negative Matrix Factorization (NNMF), [AAp3, AA1, AA2].

The presented computational results are based on the text collections of State of the Union speeches of USA presidents [D2]. The code in this document can be easily configured to use the much smaller text collection [D1] available online and in Mathematica/WL. (The collection [D1] is fairly small, 51 documents; the collection [D2] is much larger, 2453 documents.)

The procedures (and code) described in this document, of course, work on other types of text collections. For example: movie reviews, podcasts, editorial articles of a magazine, etc.

A secondary objective of this document is to illustrate the use of the monadic programming pipeline as a Software design pattern, [AA3]. In order to make the code concise in this document I wrote the package MonadicLatentSemanticAnalysis.m, [AAp5]. Compare with the code given in [AA1].

The very first version of this document was written for the 2017 summer course “Data Science for the Humanities” at the University of Oxford, UK.

Outline of the procedure applied

The procedure described in this document has the following steps.

  1. Get a collection of documents with known dates of publishing.
    • Or other types of tags associated with the documents.
  2. Do preliminary analysis of the document collection.
    • Number of documents; number of unique words.

    • Number of words per document; number of documents per word.

    • (Some of the statistics of this step are done easier after the Linear vector space representation step.)

  3. Optionally perform Natural Language Processing (NLP) tasks.

    1. Obtain or derive stop words.

    2. Remove stop words from the texts.

    3. Apply stemming to the words in the texts.

  4. Linear vector space representation.

    • This means that we represent the collection with a document-word matrix.

    • Each unique word is a basis vector in that space.

    • For each document the corresponding point in that space is derived from the number of appearances of the document's words.

  5. Extract topics.

    • In this document NNMF is used.

    • In order to obtain better results with NNMF some experimentation and refinements of the topics search have to be done.

  6. Map the documents over the extracted topics.

    • The original matrix of the vector space representation is replaced with a matrix with columns representing topics (instead of words.)
  7. Order the topics according to their presence across the years (or other related tags).
    • This can be done with hierarchical clustering.

    • Alternatively,

      1. for a given topic find the weighted mean of the years of the documents that have that topic, and

      2. order the topics according to those mean values.

  8. Visualize the evolution of the documents according to their topics.

    1. This can be done by simply finding the contingency matrix year vs topic.

    2. For the president speeches we can use the president names for the time-line temporal axis instead of years.

      • Because the presidents' terms of office do not overlap.

Remark: Some of the functions used in this document combine several steps into one function call (with corresponding parameters.)

Packages

This loads the packages [AAp1-AAp8]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/HeatmapPlot.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/RSparseMatrix.m"];

(Note that some of the packages are imported automatically by [AAp5].)

The functions of the central package in this document, [AAp5], have the prefix “LSAMon”. Here is a sample of those names:

Short@Names["LSAMon*"]

(* {"LSAMon", "LSAMonAddToContext", "LSAMonApplyTermWeightFunctions", <>, "LSAMonUnit", "LSAMonUnitQ", "LSAMonWhen"} *)

Data load

In this section we load a text collection from a specified source.

The text collection from “Presidential Nomination Acceptance Speeches”, [D1], is small and can be used for multiple code verifications and re-runnings. The “State of Union addresses of USA presidents” text collection from [D2] was converted to a Mathematica/WL object by Christopher Wolfram (and sent to me in a private communication.) The text collection [D2] provides far more interesting results (and they are shown below.)

If[True,
  speeches = ResourceData[ResourceObject["Presidential Nomination Acceptance Speeches"]];
  names = StringSplit[Normal[speeches[[All, "Person"]]][[All, 2]], "::"][[All, 1]],

  (*ELSE*)
  (*State of the union addresses provided by Christopher Wolfram. *)      
  Get["~/MathFiles/Digital humanities/Presidential speeches/speeches.mx"];
  names = Normal[speeches[[All, "Name"]]];
];

dates = Normal[speeches[[All, "Date"]]];
texts = Normal[speeches[[All, "Text"]]];

Dimensions[speeches]

(* {2453, 4} *)

(The dimensions shown correspond to the larger collection [D2]; with the collection [D1] the first dimension is 51.)

Basic statistics for the texts

Using different contingency matrices we can derive basic statistical information about the document collection. (The document-word matrix is a contingency matrix.)

First we convert the text data into long form:

docWordRecords = 
  Join @@ MapThread[
    Thread[{##}] &, {Range@Length@texts, names, 
     DateString[#, {"Year"}] & /@ dates, 
     DeleteStopwords@*TextWords /@ ToLowerCase[texts]}, 1];

Here is a sample of the rows of the long-form:

GridTableForm[RandomSample[docWordRecords, 6], 
 TableHeadings -> {"document index", "name", "year", "word"}]

Here is a summary:

Multicolumn[
 RecordsSummary[docWordRecords, {"document index", "name", "year", "word"}, "MaxTallies" -> 8], 4, Dividers -> All, Alignment -> Top]

Using the long form we can compute the document-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {1, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], 
 MaxPlotPoints -> 300, ImageSize -> 800, 
 AspectRatio -> 1/3]

Here is the president-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {2, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], MaxPlotPoints -> 300, ImageSize -> 800, AspectRatio -> 1/3]

Here is an alternative way to compute text collection statistics through the document-word matrix computed within the monad LSAMon:

LSAMonUnit[texts]⟹LSAMonEchoTextCollectionStatistics[];

Procedure application

Stop words

Here is one way to obtain stop words:

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];
Length[stopWords]
RandomSample[stopWords, 12]

(* 304 *)

(* {"has", "almost", "next", "WHO", "seeming", "together", "rather", "runners-up", "there's", "across", "cannot", "me"} *)

We can complete this list with additional stop words derived from the collection itself. (Not done here.)
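
Here is one possible sketch of deriving such candidates (an illustration only, assuming texts from the section "Data load"): take the terms, other than the dictionary stop words, that occur in the largest number of documents.

(* document frequencies of the unique non-stop-word terms of each text *)
termDocumentCounts = Counts[Flatten[Union @* DeleteStopwords @* TextWords /@ ToLowerCase[texts]]];
(* the most widely spread terms are stop word candidates *)
Keys@TakeLargest[termDocumentCounts, 20]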

Linear vector space representation and dimension reduction

Remark: In the rest of the document we use “term” to mean “word” or “stemmed word”.

The following code makes a document-term matrix from the document collection, exaggerates the representations of the terms using “TF-IDF”, and then does topic extraction through dimension reduction. The dimension reduction is done with NNMF; see [AAp3, AA1, AA2].

SeedRandom[312]

mObj =
  LSAMonUnit[texts]⟹
   LSAMonMakeDocumentTermMatrix[{}, stopWords]⟹
   LSAMonApplyTermWeightFunctions[]⟹
   LSAMonTopicExtraction[Max[5, Ceiling[Length[texts]/100]], 60, 12, "MaxSteps" -> 6, "PrintProfilingInfo" -> True];

This table shows the pipeline commands above with comments:

Detailed description

The monad object mObj has a context of named values that is an Association with the following keys:

Keys[mObj⟹LSAMonTakeContext]

(* {"texts", "docTermMat", "terms", "wDocTermMat", "W", "H", "topicColumnPositions", "automaticTopicNames"} *)

Let us clarify the values by briefly describing the computational steps.

  1. From texts we derive the document-term matrix \text{docTermMat}\in \mathbb{R}^{n \times m}, where n is the number of documents and m is the number of terms.
    • The terms are words or stemmed words.

    • This is done with LSAMonMakeDocumentTermMatrix.

  2. From docTermMat is derived the (weighted) matrix wDocTermMat using “TF-IDF”.

    • This is done with LSAMonApplyTermWeightFunctions.
  3. Using docTermMat we find the terms that are present in a sufficiently large number of documents; their column indices are assigned to topicColumnPositions.

  4. Matrix factorization. (A toy illustration is given right after this list.)

    1. Take the sub-matrix \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\in \mathbb{R}^{n \times m_1}, where m_1 = |\text{topicColumnPositions}|.

    2. Compute with NNMF the factorization \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\approx W H, where W\in \mathbb{R}^{n \times k}, H\in \mathbb{R}^{k \times m_1}, and k is the number of topics.

    3. The values for the keys "W", "H", and "topicColumnPositions" are computed and assigned by LSAMonTopicExtraction.

  5. From the top terms of each topic are derived automatic topic names and assigned to the key automaticTopicNames in the monad context.

    • Also done by LSAMonTopicExtraction.
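
As mentioned in the factorization step above, here is a toy, self-contained NNMF sketch using the classic Lee-Seung multiplicative update rules. (This only illustrates the factorization shapes; the computations in this document use the implementation in [AAp3].)

SeedRandom[7];
A = RandomReal[1, {6, 4}]; (* n = 6 documents, m1 = 4 terms *)
k = 2;                     (* number of topics *)
W = RandomReal[1, {6, k}]; (* documents × topics *)
H = RandomReal[1, {k, 4}]; (* topics × terms *)
Do[
  H = H (Transpose[W].A)/(Transpose[W].W.H + 10.^-9);
  W = W (A.Transpose[H])/(W.H.Transpose[H] + 10.^-9),
  {100}];
(* relative reconstruction error of the factorization *)
Norm[A - W.H, "Frobenius"]/Norm[A, "Frobenius"]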

Statistical thesaurus

At this point in the object mObj we have the factors of NNMF. Using those factors we can find a statistical thesaurus for a given set of words. The following code calculates such a thesaurus, and echoes it in a tabulated form.

queryWords = {"arms", "banking", "economy", "education", "freedom", 
   "tariff", "welfare", "disarmament", "health", "police"};

mObj⟹
  LSAMonStatisticalThesaurus[queryWords, 12]⟹
  LSAMonEchoStatisticalThesaurus[];

By observing the thesaurus entries we can see that the words in each entry are semantically related.

Note, that the word “welfare” strongly associates with “[applause]”. The rest of the query words do not, which can be seen by examining larger thesaurus entries:

thRes =
  mObj⟹
   LSAMonStatisticalThesaurus[queryWords, 100]⟹
   LSAMonTakeValue;
Cases[thRes, "[applause]", Infinity]

(* {"[applause]", "[applause]"} *)

The second “[applause]” appears in the thesaurus entry of “education”.

Detailed description

The statistical thesaurus is computed by using the NNMF’s right factor H.

For a given term its corresponding column in H is found, and the nearest neighbors of that column among the other term columns are found in the space \mathbb{R}^{k} using the Euclidean norm.
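
Here is a minimal, self-contained sketch of that nearest-neighbors idea over a synthetic factor. (The terms and the matrix below are made up; this is not LSAMon's internal code.)

SeedRandom[12];
terms = {"arms", "weapons", "tariff", "trade", "health"};
H = RandomReal[1, {3, Length[terms]}]; (* 3 topics × 5 terms *)
nearestTerms[word_String, n_Integer] :=
  Block[{j = First@FirstPosition[terms, word]},
   (* order the term columns of H by Euclidean distance to the query term's column *)
   terms[[Ordering[Norm[H[[All, j]] - #] & /@ Transpose[H], n]]]];

nearestTerms["arms", 3]

The query term itself is listed first, since the distance of its column to itself is zero.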

Extracted topics

The topics are the rows of the right factor H of the factorization obtained with NNMF.

Let us tabulate the topics found above with LSAMonTopicExtraction:

mObj⟹ LSAMonEchoTopicsTable["NumberOfTerms" -> 6, "MagnificationFactor" -> 0.8, Appearance -> "Horizontal"];

Map documents over the topics

The function LSAMonTopicsRepresentation finds the top outliers for each row of NNMF’s left factor W. (The outliers are found using the package [AAp4].) The obtained list of indices gives the topic representation of the collection of texts.

Short@(mObj⟹LSAMonTopicsRepresentation[]⟹LSAMonTakeContext)["docTopicIndices"]

(* {{53}, {47, 53}, {25}, {46}, {44}, {15, 42}, {18}, <>, {30}, {33}, {7, 60}, {22, 25}, {12, 13, 25, 30, 49, 59}, {48, 57}, {14, 41}} *)

Further we can see that if the documents have tags associated with them — like author names or dates — we can make a contingency matrix of tags vs topics. (See [AAp8, AA4].) This is also done by the function LSAMonTopicsRepresentation that takes tags as an argument. If the tags argument is Automatic, then the tags are simply the document indices.

Here is an example:

rsmat = mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Here is an example of calling the function LSAMonTopicsRepresentation with arbitrary tags.

rsmat = mObj⟹LSAMonTopicsRepresentation[DateString[#, "MonthName"] & /@ dates]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Note that the matrix plots above are very close to the charting of the Great conversation that we are looking for. This can be made more obvious by observing the row names and columns names in the tabulation of the transposed matrix rsmat:

Magnify[#, 0.6] &@MatrixForm[Transpose[rsmat]]

Charting the great conversation

In this section we show several ways to chart the Great Conversation in the collection of speeches.

There are several possible ways to make the chart: using a time-line plot, using heat-map plot, and using appropriate tabulation (with MatrixForm or Grid).

In order to make the code in this section more concise the package RSparseMatrix.m, [AAp7, AA5], is used.

Topic name to topic words

This command makes an Association between the topic names and the top topic words.

aTopicNameToTopicTable = 
  AssociationThread[(mObj⟹LSAMonTakeContext)["automaticTopicNames"], 
   mObj⟹LSAMonTopicsTable["NumberOfTerms" -> 12]⟹LSAMonTakeValue];

Here is a sample:

Magnify[#, 0.7] &@ aTopicNameToTopicTable[[1 ;; 3]]

Time-line plot

This command makes a contingency matrix between the documents and the topics (as described above):

rsmat = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue]

This time-plot shows great conversation in the USA presidents state of union speeches:

TimelinePlot[
 Association@
  MapThread[
   Tooltip[#2, aTopicNameToTopicTable[#2]] -> dates[[ToExpression@#1]] &, 
   Transpose[RSparseMatrixToTriplets[rsmat]]], 
 PlotTheme -> "Detailed", ImageSize -> 1000, AspectRatio -> 1/2, PlotLayout -> "Stacked"]

The plot is too cluttered, so it is a good idea to investigate other visualizations.

Topic vs president heatmap

We can use the USA president names instead of years in the Great Conversation chart because the USA presidents' terms do not overlap.

This makes a contingency matrix presidents vs topics:

rsmat2 = ToRSparseMatrix[
   mObj⟹LSAMonTopicsRepresentation[
     names]⟹LSAMonTakeValue];

Here we compute the chronological order of the presidents based on the dates of their speeches:

nameToMeanYearRules = 
  Map[#[[1, 1]] -> Mean[N@#[[All, 2]]] &, 
   GatherBy[MapThread[List, {names, ToExpression[DateString[#, "Year"]] & /@ dates}], First]];
ordRowInds = Ordering[RowNames[rsmat2] /. nameToMeanYearRules];

This heat-map plot uses the (experimental) package HeatmapPlot.m, [AAp6]:

Block[{m = rsmat2[[ordRowInds, All]]},
 HeatmapPlot[SparseArray[m], RowNames[m], 
  Thread[Tooltip[ColumnNames[m], aTopicNameToTopicTable /@ ColumnNames[m]]],
  DistanceFunction -> {None, Sort}, ImageSize -> 1000, 
  AspectRatio -> 1/2]
 ]

Note the value of the option DistanceFunction: there is no re-ordering of the rows, and the columns are reordered by sorting. Also, the topic names on the horizontal axis have tool-tips.

References

Text data

[D1] Wolfram Data Repository, "Presidential Nomination Acceptance Speeches".

[D2] US Presidents, State of the Union Addresses, Trajectory, 2016. ISBN 1681240009, 9781681240008.

[D3] Gerhard Peters, "Presidential Nomination Acceptance Speeches and Letters, 1880-2016", The American Presidency Project.

[D4] Gerhard Peters, "State of the Union Addresses and Messages", The American Presidency Project.

Packages

[AAp1] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica(2013), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Monadic latent semantic analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, RSparseMatrix Mathematica package, (2015), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub.

Books and articles

[AA1] Anton Antonov, "Topic and thesaurus extraction from a document collection", (2013), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, "Statistical thesaurus from NPR podcasts", (2013), MathematicaForPrediction at WordPress blog.

[AA3] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub.

[AA4] Anton Antonov, "Contingency tables creation examples", (2016), MathematicaForPrediction at WordPress blog.

[AA5] Anton Antonov, "RSparseMatrix for sparse matrices with named rows and columns", (2015), MathematicaForPrediction at WordPress blog.

[Wk1] Wikipedia entry, Great Conversation.

[MA1] Mortimer Adler, "The Great Conversation Revisited," in The Great Conversation: A Peoples Guide to Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago, 1990, p. 28.

[MA2] Mortimer Adler, "Great Ideas".

[MA3] Mortimer Adler, "How to Think About the Great Ideas: From the Great Books of Western Civilization", 2000, Open Court.

Monad code generation and extension

… in Mathematica / Wolfram Language

Anton Antonov

MathematicaForPrediction at GitHub

MathematicaVsR at GitHub

June 2017

Introduction

This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.

The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:

  1. easy to construct, read, and modify sequences of commands (pipelines),
  2. easy to program polymorphic behaviour,
  3. easy to program context utilization.

Speaking informally,

  • Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.

The theoretical background provided in this document is given in the Wikipedia article on Monadic programming, [Wk1], and the article “The essence of functional programming” by Philip Wadler, [H3]. The code in this document is based on the primary monad definition given in [Wk1,H3]. (Based on the “Kleisli triple” and used in Haskell.)

The general monad structure can be seen as:

  1. a software design pattern;
  2. a fundamental programming construct (similar to class in object-oriented programming);
  3. an interface for software types to have implementations of.

In this document we treat the monad structure as a design pattern, [Wk3]. (After reading [H3] point 2 becomes more obvious. A similar in spirit, minimalistic approach to Object-oriented Design Patterns is given in [AA1].)

We do not deal with types for monads explicitly; we generate code for monads instead. One reason for this is the “monad design pattern” perspective; another one is that in Mathematica / WL the notion of algebraic data type is not needed — pattern matching comes from the core “book of replacement rules” principle.
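
As a hypothetical illustration of that principle, “typed” polymorphic dispatch falls out of pattern matching on the wrapped values (the function process below is made up):

process[Maybe[x_Integer]] := Maybe[x + 1];
process[Maybe[x_String]] := Maybe[StringLength[x]];

process /@ {Maybe[3], Maybe["monad"]}

(* {Maybe[4], Maybe[5]} *)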

The rest of the document is organized as follows.

1. Fundamental sections The section “What is a monad?” gives the necessary definitions. The section “The basic Maybe monad” shows how to program a monad from scratch in Mathematica / WL. The section “Extensions with polymorphic behavior” shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming, the rest of the document can be skipped.)

2. Monadic programming in practice The section “Monad code generation” describes packages for generating monad code. The section “Flow control in monads” describes additional, control flow functionalities. The section “General work-flow of monad code generation utilization” gives a general perspective on the use of monad code generation. The section “Software design with monadic programming” discusses (small scale) software design with monadic programming.

3. Case study sections The case study sections “Contextual monad classification” and “Tracing monad pipelines” hopefully have interesting and engaging examples of monad code generation, extension, and utilization.

What is a monad?

The monad definition

In this document a monad is any combination of a symbol m and two operators, unit and bind, that adheres to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [H3].)

Here are operators for a monad associated with a certain symbol M:

  1. monad unit function (“return” in Haskell notation) is Unit[x_] := M[x];
  2. monad bind function (“>>=” in Haskell notation) is a rule like Bind[M[x_], f_] := f[x] with MatchQ[f[x],M[_]] giving True.

Note that:

  • the function Bind unwraps the content of M[_] and gives it to the function f;
  • the functions fi are responsible for returning results wrapped with the monad symbol M.

Here is an illustration formula showing a monad pipeline:

"Monad-formula-generic"

From the definition and formula it should be clear that if for the result of Bind[M[x], f] the test MatchQ[f[x], _M] gives True, then that result is ready to be fed to the next binding operation in the monad's pipeline. Also, it is clear that it is easy to program the pipeline functionality with Fold:

Fold[Bind, M[x], {f1, f2, f3}]

(* Bind[Bind[Bind[M[x], f1], f2], f3] *)

The monad laws

The monad laws definitions are taken from [H1] and [H3]. In the monad laws given below the symbol “⟹” is for the monad's binding operation and “↦” is for a function in anonymous form.

Here are the laws:

  • Left identity: Unit[a] ⟹ f is equivalent to f[a].

  • Right identity: m ⟹ Unit is equivalent to m.

  • Associativity: (m ⟹ f) ⟹ g is equivalent to m ⟹ (x ↦ f[x] ⟹ g).

Remark: The monad laws are satisfied for every symbol in Mathematica / WL with List being the unit operation and Apply being the binding operation.
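
Here is a quick sketch verifying that remark, with ad-hoc functions f and g that return List-wrapped results:

f[x_] := List[x + 1];
g[x_] := List[x^2];

Apply[f, List[a]] === f[a]                                       (* left identity *)
Apply[List, List[a]] === List[a]                                 (* right identity *)
Apply[g, Apply[f, List[a]]] === Apply[Apply[g, f[#]] &, List[a]] (* associativity *)

(* True True True *)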

Expected monadic programming features

Looking at formula (1) — and having certain programming experiences — we can expect the following features when using monadic programming.

  • Computations that can be expressed with monad pipelines are easy to construct and read.
  • By programming the binding function we can tuck-in a variety of monad behaviours — this is the so called “programmable semicolon” feature of monads.
  • Monad pipelines can be constructed with Fold, but with suitable definitions of infix operators like DoubleLongRightArrow (⟹) we can produce code that resembles the pipeline in formula (1).
  • A monad pipeline can have polymorphic behaviour by overloading the signatures of fi (and if we have to, Bind.)

These points are clarified below. For more complete discussions see [Wk1] or [H3].

The basic Maybe monad

It is fairly easy to program the basic monad Maybe discussed in [Wk1].

The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails then the whole pipeline returns a designated failure symbol, say None; otherwise the result after the last step is wrapped in another designated symbol, say Maybe.

Here is the special version of the generic pipeline formula (1) for the Maybe monad:

"Monad-formula-maybe"

“Monad-formula-maybe”

Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):

MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];

MaybeUnit[None] := None;
MaybeUnit[x_] := Maybe[x];

MaybeBind[None, f_] := None;
MaybeBind[Maybe[x_], f_] := 
  Block[{res = f[x]}, If[FreeQ[res, None], res, None]];

MaybeEcho[x_] := Maybe@Echo[x];
MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];

MaybeOption[f_][xs_] := 
  Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];

In order to write the code in pipeline form let us give definitions to a suitable infix operator (like “⟹”) that uses MaybeBind:

DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
DoubleLongRightArrow[x_, y_, z__] := 
  DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];

Here is an example of a Maybe monad pipeline using the definitions so far:

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

MaybeUnit[data]⟹(* lift data into the monad *)
 (Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 MaybeEcho⟹(* display current value *)
 (Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
 None *)

The result is None because:

  1. the data has a number that is too small, and
  2. the definition of MaybeBind stops the pipeline aggressively using a FreeQ[_,None] test.

Monad laws verification

Let us convince ourselves that the current definition of MaybeBind gives a monad.

The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.
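
Here is a minimal sketch of such a verification with ad-hoc functions f and g:

f[x_] := Maybe[x^2];
g[x_] := Maybe[x + 1];

MaybeBind[MaybeUnit[a], f] === f[a]         (* left identity *)
MaybeBind[Maybe[a], MaybeUnit] === Maybe[a] (* right identity *)
MaybeBind[MaybeBind[Maybe[a], f], g] === MaybeBind[Maybe[a], MaybeBind[f[#], g] &] (* associativity *)

(* True True True *)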

"Monad-laws-table-Maybe"

“Monad-laws-table-Maybe”

Extensions with polymorphic behavior

We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.

For example, the extension of the Maybe monad to handle Dataset objects is fairly easy and straightforward.

Here is the formula of the Maybe monad pipeline extended with Dataset objects:

Here is an example of a polymorphic function definition for the Maybe monad:

MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];

MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];

See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.

A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)

Polymorphic monads in R’s dplyr

The R package dplyr, [R1], has implementations centered around monadic polymorphic behavior. The command pipelines based on dplyr can work on R data frames, SQL tables, and Spark data frames without changes.

Here is a diagram of a typical work-flow with dplyr:

"dplyr-pipeline"

The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.

Monad code generation

We can see monad code definitions like the ones for Maybe as some sort of initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates; (see [WL1]). As it was mentioned in the introduction, we do not deal with types for monads explicitly, we generate code for monads instead.

In this section are given examples with packages that generate monad codes. The case study sections have examples of packages that utilize generated monad codes.

Maybe monads code generation

The package [AA2] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section “General work-flow of monad code generation utilization”.)

Here is an example:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]

GenerateMaybeMonadCode["AnotherMaybe"]

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

AnotherMaybeUnit[data]⟹(* lift data into the monad *)
 (AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 AnotherMaybeEcho⟹(* display current value *)
 (AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
   AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
   None *)

We see that we get the same result as above (None) and a message prompting failure.

State monads code generation

The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [AA3].)

Here is the special version of the generic pipeline formula (1) for the State monad:

"Monad-formula-State"

“Monad-formula-State”

Note that since the State monad pipeline carries both a value and a state, it is a good idea to have functions that manipulate them separately. For example, we can have functions for context modification and context retrieval. (These are done in [AA3].)
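
Before loading the generator, here is a rough, hypothetical sketch of the shape of the definitions it produces. (The prefix “MySt” is made up; see [AA3] for the actual generated code.) The important point is that the pipeline functions receive both the value and the context:

MyStUnit[x_] := MySt[x, <||>];
MyStUnit[x_, c_Association] := MySt[x, c];
MyStBind[None, f_] := None;
MyStBind[MySt[x_, c_Association], f_] :=
  Block[{res = f[x, c]}, If[MatchQ[res, MySt[___]], res, None]];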

This loads the package [AA3]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]

This generates the State monad for the prefix “StMon”:

GenerateStateMonadCode["StMon"]

The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Functions for context deposit and retrieval are invoked several times.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
  StMonEchoValue⟹
  StMonEchoContext⟹
  StMonAddToContext["data"]⟹
  StMonEchoContext⟹
  (StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["data"]⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["mark"]⟹
  StMonEchoValue;

(* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   context: <|mark->TooSmall,threshold->0.5|>
   context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
   value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
   value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   value: TooSmall *)

Flow control in monads

We can implement dedicated functions for governing the pipeline flow in a monad.

Let us look at a breakdown of these kind of functions using the State monad StMon generated above.

Optional acceptance of a function result

A basic and simple pipeline control function is for optional acceptance of result — if failure is obtained applying f then we ignore its result (and keep the current pipeline value.)

Here is an example with StMonOption :

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonOption[If[# < 0.3, None] & /@ # &]⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)

Without StMonOption we get failure:

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 (If[# < 0.3, None] & /@ # &)⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
   None *)

Conditional execution of functions

It is natural to want to have the ability to chose a pipeline function application based on a condition.

This can be done with the functions StMonIfElse and StMonWhen.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonIfElse[
  Or @@ (# < 0.4 & /@ #) &,
  (Echo["A too small value is present.", "warning:"]; 
    StMon[Style[#1, Red], #2]) &,
  StMon[Style[#1, Blue], #2] &]⟹
 StMonEchoValue

 (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    warning: A too small value is present.
    value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)

Remark: Using flow control functions like StMonIfElse and StMonWhen with appropriate messages is a better way of handling computations that might fail. The silent failures handling of the basic Maybe monad is convenient only in a small number of use cases.

Iterative functions

The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of Nest, NestWhile, FoldList, etc.

In [AA3] these functionalities are provided through the function StMonIterate.

Here is a basic example using Nest that corresponds to Nest[#+1&,1,3]:

StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]

(* StMon[4, <||>] *)

Consider this command that uses the full signature of NestWhileList:

NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]

(* {1, 2, 3, 4, 5} *)

Here is the corresponding StMon iteration code:

StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]

(* StMon[{1, 2, 3, 4, 5}, <||>] *)

Here is another results accumulation example with FixedPointList :

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)

When the functions NestList, NestWhileList, FixedPointList are used with StMonIterate their results can be stored in the context. Here is an example:

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>], 
    StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>], 
    StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)

More elaborate tests can be found in [AA8].

Partial pipelines

Because of the associativity law we can design pipeline flows based on functions made of “sub-pipelines.”

fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];

fDIter = Function[{x, ct}, 
   StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];

StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;

(*
  value: 7
  context: <||>
  value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
  context: <||> *)

General work-flow of monad code generation utilization

With the abilities to generate and utilize monad codes it is natural to consider the following work flow. (Also shown in the diagram below.)

  1. Come up with an idea that can be expressed with monadic programming.
  2. Look for suitable monad implementation.
  3. If there is no such implementation, make one (or two, or five.)
  4. Having a suitable monad implementation, generate the monad code.
  5. Implement additional pipeline functions addressing envisioned use cases.
  6. Start making pipelines for the problem domain of interest.
  7. Are the pipelines satisfactory? If not, go to 5. (Or 2.)

"make-monads"

Monad templates

The template nature of the general monads can be exemplified with the group of functions in the package StateMonadCodeGenerator.m, [AA3].

They are in five groups:

  1. base monad functions (unit, binding),
  2. display of the value and context,
  3. context manipulation (deposit, retrieval, modification),
  4. flow governing (optional new value, conditional function application, iteration),
  5. other convenience functions.

We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.
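One way to survey these groups for a concrete monad is to generate the monad code and list the produced symbols. Here is a minimal sketch assuming the generator package [AA3] is loaded; GenerateStateMonadCode is its code generation function (analogous to GenerateMaybeMonadCode used further below):

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]
GenerateStateMonadCode["StMon"]
Names["StMon*"]

(* a list containing StMonUnit, StMonBind, StMonEchoValue, StMonEchoContext,
   StMonOption, StMonIfElse, StMonWhen, StMonIterate, ... *)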

Software design with monadic programming

The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).

The answers to the question “When to use monadic programming?” can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (as in Haskell and Scala.) Here we discuss making specialized software with or within already existing systems.

Framework design

Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that: 1) significant speed-up can be achieved when making new applications, and 2) a set of policies can be imposed on the new applications.

The rigidness of the framework provides and supports its flexibility — the framework has a backbone of rigid parts and a set of “hot spots” where new functionalities are plugged-in.

Usually Object-Oriented Programming (OOP) frameworks provide inversion of control — the general work-flow is already established, only parts of it are changed. (This is characterized by “leave the driving to us” and “don’t call us, we will call you.”)

The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)

In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)

DSL development

Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.

Generally, in order to make a DSL we have to choose the language syntax and grammar. Using monadic programming the syntax and grammar commands are clear. (The monad pipelines are the commands.) What is left is “just” the choice of particular functions and their implementations.
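For example, with the ClCon monad of the case study below, a command sequence like “split the data; make a logistic regression classifier; compute accuracy and recall” translates directly into a pipeline. Here is a sketch using the ClCon functions of [AA4] (ds is a dataset as loaded in that case study):

ClConUnit[ds]⟹
 ClConSplitData[0.7]⟹
 ClConMakeClassifier["LogisticRegression"]⟹
 ClConClassifierMeasurements[{"Accuracy", "Recall"}]⟹
 ClConEchoValue;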

Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar — without developing the corresponding interpreters — would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.

Contextual monad classification (case study)

In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.

Motivation

We want to provide a DSL for doing machine learning classification tasks that allows us:

  1. to do basic summarization and visualization of the data;
  2. to control splitting of the data into training and testing sets;
  3. to apply the built-in classifiers;
  4. to apply classifier ensembles (see [AA9] and [AA10]);
  5. to evaluate classifier performances with standard measures; and
  6. to make ROC plots.

Also, we want the DSL design to provide clear directions for how to add (hook up or plug in) new functionalities.

The package [AA4] discussed below provides such a DSL through monadic programming.

Package and data loading

This loads the package [AA4]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]

This gets some test data (the Titanic dataset):

dataName = "Titanic";
(* Read the example data into a Dataset object *)
ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
(* Get the variable names and drop the "passenger" prefix *)
varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
If[dataName == "FisherIris", varNames = Most[varNames]];
(* Convert each record into an association keyed by the variable names *)
ds = ds[All, AssociationThread[varNames -> #] &];

Monad design

The package [AA4] provides functions for the monad ClCon — the functions implemented in [AA4] have the prefix “ClCon”.

The classifier contexts are Association objects. The pipeline values can have the form:

ClCon[ val, context:(_String|_Association) ]

The ClCon specific monad functions deposit or retrieve values from the context with the keys: “trainData”, “testData”, “classifier”. The general idea is that if the current value of the pipeline cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, then a message is issued. This is illustrated with the following example of a pipeline with comments.

"ClCon-basic-example"

The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context: different functions are used if that variable is a ClassifierFunction object or an association of named ClassifierFunction objects.

Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure. With those kinds of operations it would be easy to make interpreters for natural language DSLs.
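The argument-completion convention described earlier — take what is missing from the context, otherwise fail with a message — can be outlined with the following sketch. This is not the actual package code, only an illustration (the function name is hypothetical):

(* Illustrative sketch only, not the actual ClCon implementation. *)
ClearAll[ClConMeasurementsSketch]
ClConMeasurementsSketch[measures_List][x_, context_Association] :=
  Which[
   (* the current value provides the classifier *)
   Head[x] === ClassifierFunction,
   ClCon[ClassifierMeasurements[x, ClConToNormalClassifierData[context["testData"]], measures], context],
   (* otherwise retrieve the classifier from the context *)
   KeyExistsQ[context, "classifier"],
   ClCon[ClassifierMeasurements[context["classifier"], ClConToNormalClassifierData[context["testData"]], measures], context],
   (* no suitable arguments -- issue a message and fail *)
   True,
   (Echo["Cannot find a classifier.", "ClConMeasurementsSketch:"]; None)
  ];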

Another usage example

The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, acceptance test, and if the results are rejected a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting. That allows the conditional classifier switch to be concisely specified.

First, let us define a function that takes a Classify method as an argument, makes a classifier, and calculates performance measures.

ClSubPipe[method_String] :=
  Function[{x, ct},
   ClConUnit[x, ct]⟹
    ClConMakeClassifier[method]⟹
    ClConEchoFunctionContext["classifier:", 
     ClassifierInformation[#["classifier"], Method] &]⟹
    ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
    ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
    ClConEchoValue⟹
    ClConEchoFunctionContext[
     ClassifierMeasurements[#["classifier"], 
     ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
   ];

Using the sub-pipeline function ClSubPipe we make the outlined pipeline.

SeedRandom[12]
res =
  ClConUnit[ds]⟹
   ClConSplitData[0.7]⟹
   ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
   ClConEchoFunctionValue["xtabs:", 
    MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
   ClSubPipe["LogisticRegression"]⟹
   (If[#1["Accuracy"] > 0.8,
      Echo["Good accuracy!", "Success:"]; ClConFail,
      Echo["Make a new classifier", "Inaccurate:"]; 
      ClConUnit[#1, #2]] &)⟹
   ClSubPipe["RandomForest"];

"ClCon-pipeline-2-output"

Tracing monad pipelines (case study)

The monadic implementations in the package MonadicTracing.m, [AA5], allow tracking of the pipeline execution of functions within other monads.

The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation. (I.e. without conversion to strings etc.)

It turned out that by programming MonadicTracing.m I came up with a monad transformer; see [Wk2], [H2].

Package loading

This loads the package [AA5]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Usage example

This generates a Maybe monad to be used in the example (for the prefix “Perhaps”):

GenerateMaybeMonadCode["Perhaps"]
GenerateMaybeMonadSpecialCode["Perhaps"]

In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings. Producing the grid of functions and comments happens “naturally” with the monad function TraceMonadEchoGrid.

data = RandomInteger[10, 15];

TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
  TraceMonadEchoContext⟹
  PerhapsFilter[# > 3 &]⟹"filter current value"⟹
  PerhapsEcho⟹"display current value"⟹
  PerhapsWhen[#[[3]] > 3 &, 
   PerhapsEchoFunction[Style[#, Red] &]]⟹
  (Perhaps[#/4] &)⟹
  PerhapsEcho⟹"display current value again"⟹
  TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];

Note that:

  1. the tracing is initiated by just using TraceMonadUnit;
  2. pipeline functions (actual code) and comments are interleaved;
  3. putting a comment string after a pipeline function is optional.

Another example is the ClCon pipeline in the sub-section “Monad design” in the previous section.

Summary

This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features both for day-to-day programming and for coming up with architectural designs.

The style is based on WL’s basic language features. As a consequence it is fairly concise and produces light overhead.

Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.

References

Monadic programming

[Wk1] Wikipedia entry: Monad (functional programming), URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry: Monad transformer, URL: https://en.wikipedia.org/wiki/Monad_transformer .

[Wk3] Wikipedia entry: Software Design Pattern, URL: https://en.wikipedia.org/wiki/Software_design_pattern .

[H1] Haskell.org article: Monad laws, URL: https://wiki.haskell.org/Monad_laws.

[H2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333–343. doi:10.1145/199448.199528.

[H3] Philip Wadler, “The essence of functional programming”, (1992), 19th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.

R

[R1] Hadley Wickham et al., dplyr: A Grammar of Data Manipulation, (2014), tidyverse at GitHub, URL: https://github.com/tidyverse/dplyr . (See also, http://dplyr.tidyverse.org .)

Mathematica / Wolfram Language

[WL1] Leonid Shifrin, “Metaprogramming in Wolfram Language”, (2012), Mathematica StackExchange. (Also posted at Wolfram Community in 2017.) URL of the Mathematica StackExchange answer: https://mathematica.stackexchange.com/a/2352/34008 . URL of the Wolfram Community post: http://community.wolfram.com/groups/-/m/t/1121273 .

MathematicaForPrediction

[AA1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016) MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, Maybe monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m .

[AA3] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AA4] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AA5] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AA6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AA7] Anton Antonov, Simple monadic programming, (2017), MathematicaForPrediction at GitHub. (Preliminary version, 40% done.) URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf .

[AA8] Anton Antonov, Generated State Monad Mathematica unit tests, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m .

[AA9] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

[AA10] Anton Antonov, “ROC for classifier ensembles, bootstrapping, damaging, and interpolation”, (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

Comparison of dimension reduction algorithms over mandala images generation

Introduction

This document discusses concrete algorithms for two different approaches to the generation of mandala images, [1]: direct construction with graphics primitives, and the use of machine learning algorithms.

In the experiments described in this document better results were obtained with the direct algorithms. The direct algorithms were made for the Mathematica StackExchange question "Code that generates a mandala", [3].

The main goals of this document are:

  1. to show some pretty images exploiting symmetry and multiplicity (see this album),

  2. to provide an illustrative example of comparing dimension reduction methods,

  3. to give a set-up for further discussions and investigations on mandala creation with machine learning algorithms.

Two direct construction algorithms are given: one uses rotations of a "seed" segment, the other superimposes layers of different types. The following plots show the order in which different mandala parts are created with each of the algorithms.

"Direct-Mandala-creation-algorithms-steps"

In this document we use several algorithms for dimension reduction applied to collections of images following the procedure described in [4,5]. We are going to show that with Non-Negative Matrix Factorization (NNMF) we can use mandalas made with the seed segment rotation algorithm to extract layer types and superimpose them to make colored mandalas. Using the same approach with Singular Value Decomposition (SVD) or Independent Component Analysis (ICA) does not produce good layers and the superimposition produces more "watered-down", less diverse mandalas.

From a more general perspective this document compares the statistical approach of "trying to see without looking" with the "direct simulation" approach. Another perspective is the creation of "design spaces"; see [6].

The idea of using machine learning algorithms is appealing because there is no need to make the mental effort of understanding, discerning, approximating, and programming the principles of mandala creation. We can "just" use a large collection of mandala images and generate new ones using the "internal knowledge" data of machine learning algorithms. For example, a neural network system like Deep Dream, [2], might be made to dream of mandalas.

Direct algorithms for mandala generation

In this section we present two different algorithms for generating mandalas. The first sees a mandala as being generated by rotation of a "seed" segment. The second sees a mandala as being generated by different component layers. For other approaches see [3].

The request of [3] is for generation of mandalas for coloring by hand. That is why the mandala generation algorithms are in the grayscale space. Coloring the generated mandala images is a secondary task.

By seed segment rotations

One way to come up with mandalas is to generate a segment and then produce a mandala with an appropriate number of rotations of that segment.

Here is a function and an example of random segment (seed) generation:

Clear[MakeSeedSegment]
MakeSeedSegment[radius_, angle_, n_Integer: 10, 
   connectingFunc_: Polygon, keepGridPoints_: False] :=
  Block[{t},
   t = Table[
     Line[{radius*r*{Cos[angle], Sin[angle]}, {radius*r, 0}}], {r, 0, 1, 1/n}];
   Join[If[TrueQ[keepGridPoints], t, {}], {GrayLevel[0.25], 
     connectingFunc@RandomSample[Flatten[t /. Line[{x_, y_}] :> {x, y}, 1]]}]
   ];

seed = MakeSeedSegment[10, Pi/12, 10];
Graphics[seed, Frame -> True]
"Mandala-seed-segment"

This function can make a seed segment symmetric:

Clear[MakeSymmetric]
MakeSymmetric[seed_] := {seed, 
   GeometricTransformation[seed, ReflectionTransform[{0, 1}]]};

seed = MakeSymmetric[seed];
Graphics[seed, Frame -> True]
"Mandala-seed-segment-symmetric"

Using a seed we can generate mandalas with different specification signatures:

Clear[MakeMandala]
MakeMandala[opts : OptionsPattern[]] :=      
  MakeMandala[
   MakeSymmetric[
    MakeSeedSegment[20, Pi/12, 12, 
     RandomChoice[{Line, Polygon, BezierCurve, 
       FilledCurve[BezierCurve[#]] &}], False]], Pi/6, opts];

MakeMandala[seed_, angle_?NumericQ, opts : OptionsPattern[]] :=      
  Graphics[GeometricTransformation[seed, 
    Table[RotationMatrix[a], {a, 0, 2 Pi - angle, angle}]], opts];

This code randomly selects symmetry and seed generation parameters (number of concentric circles, angles):

SeedRandom[6567]
n = 12;
Multicolumn@
 MapThread[
  Image@If[#1,
     MakeMandala[MakeSeedSegment[10, #2, #3], #2],
     MakeMandala[
      MakeSymmetric[MakeSeedSegment[10, #2, #3, #4, False]], 2 #2]
     ] &, {RandomChoice[{False, True}, n], 
   RandomChoice[{Pi/7, Pi/8, Pi/6}, n], 
   RandomInteger[{8, 14}, n], 
   RandomChoice[{Line, Polygon, BezierCurve, 
     FilledCurve[BezierCurve[#]] &}, n]}]
"Seed-segment-rotation-mandalas-complex-settings"

Here is a more concise way to generate symmetric segment mandalas:

Multicolumn[Table[Image@MakeMandala[], {12}], 5]
"Seed-segment-rotation-mandalas-simple-settings"

Note that with this approach the programming of the mandala coloring is not that trivial — weighted blending of colorized mandalas is the easiest thing to do. (Shown below.)

By layer types

This approach was given by Simon Woods in [3].

"For this one I’ve defined three types of layer, a flower, a simple circle and a ring of small circles. You could add more for greater variety."

The coloring approach with image blending given below did not work well for this algorithm, so I modified the original code in order to produce colored mandalas.

ClearAll[LayerFlower, LayerDisk, LayerSpots, MandalaByLayers]

LayerFlower[n_, a_, r_, colorSchemeInd_Integer] := 
  Module[{b = RandomChoice[{-1/(2 n), 0}]}, {If[
     colorSchemeInd == 0, White, 
     RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
    Cases[ParametricPlot[
      r (a + Cos[n t])/(a + 1) {Cos[t + b Sin[2 n t]], Sin[t + b Sin[2 n t]]}, {t, 0, 2 Pi}], 
     l_Line :> FilledCurve[l], -1]}];

LayerDisk[_, _, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
    RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
   Disk[{0, 0}, r]};

LayerSpots[n_, a_, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
    RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
   Translate[Disk[{0, 0}, r a/(4 n)], r CirclePoints[n]]};

MandalaByLayers[n_, m_, coloring : (False | True) : False, opts : OptionsPattern[]] := 
  Graphics[{EdgeForm[Black], White, 
    Table[RandomChoice[{3, 2, 1} -> {LayerFlower, LayerDisk, LayerSpots}][n, RandomReal[{3, 5}], i, 
       If[coloring, RandomInteger[{1, 17}], 0]]~Rotate~(Pi i/n), {i, m, 1, -1}]}, opts];

Here are some generated black-and-white mandalas:

SeedRandom[6567]
ImageCollage[Table[Image@MandalaByLayers[16, 20], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]
"Layer-types-superimposing-BW"

Here are some colored mandalas. (They make me think more of Viking and Native American art than of mandalas.)

ImageCollage[Table[Image@MandalaByLayers[16, 20, True], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]
"Layer-types-superimposing-colored"

Training data

Images by direct generation

iSize = 400;

SeedRandom[6567]
AbsoluteTiming[
 mandalaImages = 
   Table[Image[
     MakeMandala[
      MakeSymmetric@
       MakeSeedSegment[10, Pi/12, 12, RandomChoice[{Polygon, FilledCurve[BezierCurve[#]] &}]], Pi/6], 
     ImageSize -> {iSize, iSize}, ColorSpace -> "Grayscale"], {300}];
 ]

(* {39.31, Null} *)

ImageCollage[ColorNegate /@ RandomSample[mandalaImages, 12], Background -> White, ImagePadding -> 3, ImageSize -> 400]
"mandalaImages-sample"

External image data

See the section "Using World Wide Web images".

Direct blending

The most interesting results are obtained with the image blending procedure coded below over mandala images generated with the seed segment rotation algorithm.

SeedRandom[3488]
directBlendingImages = Table[
   RemoveBackground@
    ImageAdjust[
     Blend[Colorize[#, 
         ColorFunction -> 
          RandomChoice[{"IslandColors", "FruitPunchColors", 
            "AvocadoColors", "Rainbow"}]] & /@ 
       RandomChoice[mandalaImages, 4], RandomReal[1, 4]]], {36}];

ImageCollage[directBlendingImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

"directBlendingImages-3488-36"

Dimension reduction algorithms application

In this section we are going to apply the dimension reduction algorithms Singular Value Decomposition (SVD), Independent Component Analysis (ICA), and Non-Negative Matrix Factorization (NNMF) to a linear vector space representation (a matrix) of an image dataset. In the next section we use the bases generated by those algorithms to make mandala images.
We are going to use the packages [7,8] for ICA and NNMF respectively.


Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/IndependentComponentAnalysis.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m"]

Linear vector space representation

The linear vector space representation of the images is simple — each image is flattened to a vector (row-wise), and the image vectors are put into a matrix.

mandalaMat = Flatten@*ImageData@*ColorNegate /@ mandalaImages;
Dimensions[mandalaMat]

(* {300, 160000} *)

Re-factoring and basis images

The following code re-factors the images matrix with SVD, ICA, and NNMF and extracts the basis images.

AbsoluteTiming[
 svdRes = SingularValueDecomposition[mandalaMat, 20];
]
(* {5.1123, Null} *)

svdBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose@svdRes[[3]]];

AbsoluteTiming[
 icaRes = 
   IndependentComponentAnalysis[Transpose[mandalaMat], 20, 
    PrecisionGoal -> 4, "MaxSteps" -> 100];
]
(* {23.41, Null} *)

icaBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose[icaRes[[1]]]];

SeedRandom[452992]
AbsoluteTiming[
 nnmfRes = 
   GDCLS[mandalaMat, 20, PrecisionGoal -> 4, 
    "MaxSteps" -> 20, "RegularizationParameter" -> 0.1];
 ]
(* {233.209, Null} *)

nnmfBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, nnmfRes[[2]]];

Bases

Let us visualize the bases derived with the matrix factorization methods.

Grid[{{"SVD", "ICA", "NNMF"},
      Map[ImageCollage[#, Automatic, {400, 500}, 
        Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, 
      {svdBasisImages, icaBasisImages, nnmfBasisImages}]
     }, Dividers -> All]
"Mandala-SVD-ICA-NNMF-bases-20"

"Mandala-SVD-ICA-NNMF-bases-20"

Here are some observations about the bases.

  1. The SVD basis has an average mandala image as its first vector and the other vectors are "differences" to be added to that first vector.

  2. The SVD and ICA bases are structured similarly. That is because ICA and SVD are both based on orthogonality — the ICA factorization uses an orthogonality criterion based on Gaussian noise properties (which is more relaxed than SVD’s standard orthogonality criterion.)

  3. As expected, the NNMF basis images have black background because of the enforced non-negativity. (Black corresponds to 0, white to 1.)

  4. Compared to the SVD and ICA bases the images of the NNMF basis are structured in a radial manner. This can be demonstrated using image binarization.

Grid[{{"SVD", "ICA", "NNMF"}, Map[ImageCollage[Binarize[#, 0.5] & /@ #, Automatic, {400, 500}, Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, {svdBasisImages, icaBasisImages, nnmfBasisImages}] }, Dividers -> All]
"Mandala-SVD-ICA-NNMF-bases-binarized-0.5-20"

We can see that binarizing the NNMF basis images shows them as mandala layers. In other words, using NNMF we can convert the mandalas of the seed segment rotation algorithm into mandalas generated by an algorithm that superimposes layers of different types.

Blending with image bases samples

In this section we just show different blending images using the SVD, ICA, and NNMF bases.

Blending function definition

ClearAll[MandalaImageBlending]
Options[MandalaImageBlending] = {"BaseImage" -> {}, "BaseImageWeight" -> Automatic, "PostBlendingFunction" -> (RemoveBackground@*ImageAdjust)};
MandalaImageBlending[basisImages_, nSample_Integer: 4, n_Integer: 12, opts : OptionsPattern[]] :=      
  Block[{baseImage, baseImageWeight, postBlendingFunc, sImgs, sImgWeights},
   baseImage = OptionValue["BaseImage"];
   baseImageWeight = OptionValue["BaseImageWeight"];
   postBlendingFunc = OptionValue["PostBlendingFunction"];
   Table[(
     sImgs = 
      Flatten@Join[{baseImage}, RandomSample[basisImages, nSample]];
     If[NumericQ[baseImageWeight] && ImageQ[baseImage],
      sImgWeights = 
       Join[{baseImageWeight}, RandomReal[1, Length[sImgs] - 1]],
      sImgWeights = RandomReal[1, Length[sImgs]]
      ];
     postBlendingFunc@
      Blend[Colorize[#, 
          DeleteCases[{opts}, ("BaseImage" -> _) | ("BaseImageWeight" -> _) | ("PostBlendingFunction" -> _)],               
          ColorFunction -> 
           RandomChoice[{"IslandColors", "FruitPunchColors", 
             "AvocadoColors", "Rainbow"}]] & /@ sImgs, 
       sImgWeights]), {n}]
   ];

SVD image basis blending

SeedRandom[17643]
svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24];
ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

"svdBlendedImages-17643-24"

SeedRandom[17643]
svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24, "BaseImage" -> First[svdBasisImages], "BaseImageWeight" -> 0.5];
ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

"svdBlendedImages-baseImage-17643-24"

ICA image basis blending

SeedRandom[17643]
icaBlendedImages = MandalaImageBlending[Rest[icaBasisImages], 4, 36, "BaseImage" -> First[icaBasisImages], "BaseImageWeight" -> Automatic];
ImageCollage[icaBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

"icaBlendedImages-17643-36"

NNMF image basis blending

SeedRandom[17643]
nnmfBlendedImages = MandalaImageBlending[nnmfBasisImages, 4, 36];
ImageCollage[nnmfBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

"nnmfBlendedImages-17643-36"

Using World Wide Web images

A natural question to ask is:

What would be the outcomes of applying the above procedures to mandala images found on the World Wide Web (WWW)?

Those WWW images are most likely man made or curated.

The short answer is that the results are not that good. Better results might be obtained using a larger set of WWW images (than the 100 used for the experiment results shown below.)

Here is a sample from the WWW mandala images:

"wwwMandalaImages-sample-6

Here are the results obtained with NNMF basis:

"www-nnmfBlendedImages-12"

Future plans

My other motivation for writing this document is to set up a basis for further investigations and discussions on the following topics.

  1. Having a large image database of "real world", human-made mandalas.

  2. Utilization of Neural Network algorithms for mandala creation.

  3. Utilization of Cellular Automata for mandala generation.

  4. Investigation of mandala morphing and animations.

  5. Making a domain specific language of specifications for mandala creation and modification.

The idea of using machine learning algorithms for mandala image generation was further supported by an image classifier that recognizes fairly well (suitably normalized) mandala images obtained in different ways:

"Mandalas-classifer-measurements-matrix"

References

[1] Wikipedia entry: Mandala, https://en.wikipedia.org/wiki/Mandala .

[2] Wikipedia entry: DeepDream, https://en.wikipedia.org/wiki/DeepDream .

[3] "Code that generates a mandala", Mathematica StackExchange, http://mathematica.stackexchange.com/q/136974 .

[4] Anton Antonov, "Comparison of PCA and NNMF over image de-noising", (2016), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2016/05/07/comparison-of-pca-and-nnmf-over-image-de-noising/ .

[5] Anton Antonov, "Handwritten digits recognition by matrix factorization", (2016), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2016/11/12/handwritten-digits-recognition-by-matrix-factorization/ .

[6] Chris Carlson, "Social Exploration of Design Spaces: A Proposal", (2016), Wolfram Technology Conference 2016. URL: http://wac.36f4.edgecastcdn.net/0036F4/pub/www.wolfram.com/technology-conference/2016/SocialExplorationOfDesignSpaces.nb , YouTube: https://www.youtube.com/watch?v=YK2523nfcms .

[7] Anton Antonov, Independent Component Analysis Mathematica package, (2016), source code at MathematicaForPrediction at GitHub, package IndependentComponentAnalysis.m .

[8] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), source code at MathematicaForPrediction at GitHub, package NonNegativeMatrixFactorization.m.

Tries with frequencies in Java

Introduction

This blog post describes the installation and use in Mathematica of Tries with frequencies [1] implemented in Java [2] through a corresponding Mathematica package [3].

A prefix tree or trie, [6], is a tree data structure that stores a set of "words" that consist of "characters" — each element can be seen as a key to itself. The article [1] and packages [2,3,4] extend that data structure to have additional data (frequencies) associated with each key.

The packages [2,3] work with lists of strings only. The package [4] can work with more general data but it is much slower.

The main motivation to create the package [3] was to bring the fast trie function implementations of [2] into Mathematica in order to prototype, implement, and experiment with different text processing algorithms. (Like, inductive grammar parser generation and entity name recognition.) The speed of combining [2] and [3] is evaluated with the performance examples in the sections below.

Set-up

The following directory path has to contain the jar file "TriesWithFrequencies.jar".

$JavaTriesWithFrequenciesPath = 
  "/Users/antonov/MathFiles/MathematicaForPrediction/Java/TriesWithFrequencies";
FileExistsQ[
 FileNameJoin[{$JavaTriesWithFrequenciesPath, "TriesWithFrequencies.jar"}]]

(* True *)

For more details see the explanations in the README file in the GitHub directory of [2].

The following directory is expected to have the Mathematica package [3].

dirName = "/Users/antonov/MathFiles/MathematicaForPrediction";
FileExistsQ[FileNameJoin[{dirName, "JavaTriesWithFrequencies.m"}]]

(* True *)

AppendTo[$Path, dirName];
Needs["JavaTriesWithFrequencies`"]

This command installs Java (via JLink`) and loads the necessary Java libraries.

JavaTrieInstall[$JavaTriesWithFrequenciesPath]

Basic examples

For brevity the basic examples are not included in this blog post. Here is an album of images that shows the "JavaTrie.*" commands with their effects:

"JavaTrieExample"

More detailed explanations can be found in the Markdown document [7].

Next, we are going to look into performance evaluation examples (also given in [7].)

Membership of words

Assume we want to find the words of "Hamlet" that are not in the book "Origin of Species". This section shows that the Java trie creation and query times for this task are quite small.

Read words

The following code reads the words in the texts. We get approximately 33000 words from "Hamlet" and 151000 words from "Origin of Species".

hWords =
  Block[{words},
   words = 
    StringSplit[
     ExampleData[{"Text", "Hamlet"}], {Whitespace, 
      PunctuationCharacter}];
   words = Select[ToLowerCase[words], StringLength[#] > 0 &]
   ];
Length[hWords]

(* 32832 *)

osWords =
  Block[{words},
   words = 
    StringSplit[
     ExampleData[{"Text", "OriginOfSpecies"}], {Whitespace, 
      PunctuationCharacter}];
   words = Select[ToLowerCase[words], StringLength[#] > 0 &]
   ];
Length[osWords]

(* 151205 *)

Membership

First we create a trie with the "Origin of Species" words:

AbsoluteTiming[
 jOStr = JavaTrieCreateBySplit[osWords];
]

(* {0.682531, Null} *)

Sanity check — the "Origin of Species" words are in the trie:

AbsoluteTiming[
 And @@ JavaObjectToExpression[
   JavaTrieContains[jOStr, Characters /@ osWords]]
]

(* {1.32224, True} *)

Membership of the "Hamlet" words in the "Origin of Species" trie:

AbsoluteTiming[
 res = JavaObjectToExpression[
    JavaTrieContains[jOStr, Characters /@ hWords]];
]

(* {0.265307, Null} *)

Tallies of belonging:

Tally[res]

(* {{True, 24924}, {False, 7908}} *)

Sample of words from "Hamlet" that do not belong to "Origin of Species":

RandomSample[Pick[hWords, Not /@ res], 30]

(* {"rosencrantz", "your", "mar", "airy", "rub", "honesty", \
"ambassadors", "oph", "returns", "pale", "virtue", "laertes", \
"villain", "ham", "earnest", "trail", "unhand", "quit", "your", \
"your", "fishmonger", "groaning", "your", "wake", "thou", "liest", \
"polonius", "upshot", "drowned", "grosser"} *)

Common words sample:

RandomSample[Pick[hWords, res], 30]

(* {"as", "indeed", "it", "with", "wild", "will", "to", "good", "so", \
"dirt", "the", "come", "not", "or", "but", "the", "why", "my", "to", \
"he", "and", "you", "it", "to", "potent", "said", "the", "are", \
"question", "soft"} *)

Statistics

The node counts statistics calculation is fast:

AbsoluteTiming[
 JavaTrieNodeCounts[jOStr]
]

(* {0.002344, <|"total" -> 20723, "internal" -> 15484, "leaves" -> 5239|>} *)

The node counts statistics computation after shrinking is comparably fast:

AbsoluteTiming[
 JavaTrieNodeCounts[JavaTrieShrink[jOStr]]
]

(* {0.00539, <|"total" -> 8918,  "internal" -> 3679, "leaves" -> 5239|>} *)

The conversion of a large trie to JSON and computing statistics over the obtained tree is reasonably fast:

AbsoluteTiming[
 res = JavaTrieToJSON[jOStr];
]

(* {0.557221, Null} *)

AbsoluteTiming[
 Quantile[
  Cases[res, ("value" -> v_) :> v, \[Infinity]], 
  Range[0, 1, 0.1]]
]

(* {0.019644, {1., 1., 1., 1., 2., 3., 5., 9., 17., 42., 151205.}} *)

Dictionary infixes

Get all words from a dictionary:

allWords =  DictionaryLookup["*"];
allWords // Length

(* 92518 *)

Trie creation and shrinking:

AbsoluteTiming[
 jDTrie = JavaTrieCreateBySplit[allWords];
 jDShTrie = JavaTrieShrink[jDTrie];
]

(* {0.30508, Null} *)

JSON form extraction:

AbsoluteTiming[
 jsonRes = JavaTrieToJSON[jDShTrie];
]

(* {3.85955, Null} *)

Here are the node statistics of the original and shrunk tries:

"Orginal-trie-vs-Shrunk-trie-Node-Counts"

Find the infixes that have more than three characters and appear more than 10 times:

Multicolumn[#, 4] &@
 Select[SortBy[
   Tally[Cases[
     jsonRes, ("key" -> v_) :> v, Infinity]], -#[[-1]] &], StringLength[#[[1]]] > 3 && #[[2]] > 10 &]
"Long-infixes-in-shrunk-dictionary-trie"

Unit tests

Many of the examples shown in this document have corresponding tests in the file JavaTriesWithFrequencies-Unit-Tests.wlt hosted at GitHub.

tr = TestReport[
  dirName <> "/UnitTests/JavaTriesWithFrequencies-Unit-Tests.wlt"]
"TestReport"

References

[1] Anton Antonov, "Tries with frequencies for data mining", (2013), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2013/12/06/tries-with-frequencies-for-data-mining/ .

[2] Anton Antonov, Tries with frequencies in Java, (2017), source code at MathematicaForPrediction at GitHub, project Java/TriesWithFrequencies.

[3] Anton Antonov, Java tries with frequencies Mathematica package, (2017), source code at MathematicaForPrediction at GitHub, package JavaTriesWithFrequencies.m .

[4] Anton Antonov, Tries with frequencies Mathematica package, (2013), source code at MathematicaForPrediction at GitHub, package TriesWithFrequencies.m .

[5] Anton Antonov, Java tries with frequencies Mathematica unit tests, (2017), source code at MathematicaForPrediction at GitHub, unit tests file JavaTriesWithFrequencies-Unit-Tests.wlt .

[6] Wikipedia, Trie, https://en.wikipedia.org/wiki/Trie .

[7] Anton Antonov, "Tries with frequencies in Java", (2017), MathematicaForPrediction at GitHub.

Pareto principle adherence examples

This post (document) provides examples of the Pareto principle's manifestation in different datasets.

The Pareto principle is an interesting law that manifests in many contexts. It is also known as "Pareto law", "the law of significant few", "the 80-20 rule".

For example:

  • "80% of the land is owned by 20% of the population",

  • "10% of all lakes contain 90% of all lake water."

For extensive discussion and studied examples see the Wikipedia entry "Pareto principle", [4].
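As a synthetic illustration, consider a sample drawn from a Pareto distribution with shape parameter α = Log[4, 5] ≈ 1.16 — the value for which the 80-20 rule holds exactly. The top 20% of the sample should hold roughly 80% of the total:

SeedRandom[11];
sample = RandomVariate[ParetoDistribution[1, Log[4, 5]], 10000];
t = Reverse@Sort[sample];
Total[Take[t, Floor[0.2 Length[t]]]]/Total[t]

(* expected to be close to 0.8 *)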

It is a good idea to see for which parts of the analyzed data the Pareto principle manifests. Testing for the Pareto principle is usually simple. For example, assume that we have the GDP of all countries:

countries = CountryData["Countries"];
gdps = {CountryData[#, "Name"], CountryData[#, "GDP"]} & /@ countries;
gdps = DeleteCases[gdps, {_, _Missing}] /. Quantity[x_, _] :> x;

Grid[{RecordsSummary[gdps, {"country", "GDP"}]}, Alignment -> Top, Dividers -> All]

GDPUnsorted1

In order to test for the manifestation of the Pareto principle we have to (i) sort the GDP values in descending order, (ii) find the cumulative sums, (iii) normalize the obtained vector by the sum of all values, and (iv) plot the result. These steps are done with the following two commands:

t = Reverse@Sort@gdps[[All, 2]];
ListPlot[Accumulate[t]/Total[t], PlotRange -> All, GridLines -> {{0.2} Length[t], {0.8}}, Frame -> True]

GDPPlot1

In this document we are going to use the special function ParetoLawPlot defined in the next section and the package [1]. Most of the examples use data that is internally accessible within Mathematica. Several external data examples are considered.

See the package [1] for the function RecordsSummary. See the source file [2] for R functions that facilitate the plotting of Pareto principle graphs. See the package [3] for the outlier detection functions used below.

Definitions

This simple function makes a list plot that helps assess the manifestation of the Pareto principle. It takes the same options as ListPlot.

Clear[ParetoLawPlot]
Options[ParetoLawPlot] = Options[ListPlot];
ParetoLawPlot[dataVec : {_?NumberQ ..}, opts : OptionsPattern[]] := ParetoLawPlot[{Tooltip[dataVec, 1]}, opts];
ParetoLawPlot[dataVecs : {{_?NumberQ ..} ..}, opts : OptionsPattern[]] := ParetoLawPlot[MapThread[Tooltip, {dataVecs, Range[Length[dataVecs]]}], opts];
ParetoLawPlot[dataVecs : {Tooltip[{_?NumberQ ..}, _] ..}, opts : OptionsPattern[]] :=
  Block[{t, mc = 0.5},
   t = Map[Tooltip[(Accumulate[#]/Total[#] &)[SortBy[#[[1]], -# &]], #[[2]]] &, dataVecs];
   ListPlot[t, opts, PlotRange -> All, GridLines -> {Length[t[[1, 1]]] Range[0.1, mc, 0.1], {0.8}}, Frame -> True, FrameTicks -> {{Automatic, Automatic}, {Automatic, Table[{Length[t[[1, 1]]] c, ToString[Round[100 c]] <> "%"}, {c, Range[0.1, mc, 0.1]}]}}]
  ];

This function is useful for coloring the outliers in the list plots.

ClearAll[ColorPlotOutliers]
ColorPlotOutliers[] := # /. {Point[ps_] :> {Point[ps], Red, Point[ps[[OutlierPosition[ps[[All, 2]]]]]]}} &;
ColorPlotOutliers[oid_] := # /. {Point[ps_] :> {Point[ps], Red, Point[ps[[OutlierPosition[ps[[All, 2]], oid]]]]}} &;

These definitions can also be obtained by loading the packages MathematicaForPredictionUtilities.m and OutlierIdentifiers.m; see [1,3].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MathematicaForPredictionUtilities.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/OutlierIdentifiers.m"]

Units

Below we are going to use the metric system of units. (If preferred we can easily switch to the imperial system.)

$UnitSystem = "Metric";(*"Imperial"*)

CountryData

We are going to consider a typical Pareto principle example — wealth and income distribution.

GDP

This code finds the Gross Domestic Product (GDP) of different countries:

gdps = {CountryData[#, "Name"], CountryData[#, "GDP"]} & /@CountryData["Countries"];
gdps = DeleteCases[gdps, {_, _Missing}] /. Quantity[x_, _] :> x;

The corresponding Pareto plot (note the default grid lines) shows that 10% of countries have 90% of the wealth:

ParetoLawPlot[gdps[[All, 2]], ImageSize -> 400]

GDPPlot2

Here is the log histogram of the GDP values.

Histogram[Log10@gdps[[All, 2]], 20, PlotRange -> All]

GDPHistogram1

The following code shows the log plot of the countries' GDP values and the found outliers.

Manipulate[
 DynamicModule[{data = Transpose[{Range[Length[gdps]], Sort[gdps[[All, 2]]]}], pos},
  pos = OutlierPosition[modFunc@data[[All, 2]], tb@*opar];
  If[Length[pos] > 0,
   ListLogPlot[{data, data[[pos]]}, PlotRange -> All, PlotTheme -> "Detailed", FrameLabel -> {"Index", "GDP"}, PlotLegends -> SwatchLegend[{"All", "Outliers"}]],
   ListLogPlot[{data}, PlotRange -> All, PlotTheme -> "Detailed", FrameLabel -> {"Index", "GDP"}, PlotLegends -> SwatchLegend[{"All", "Outliers"}]]
  ]
 ],
 {{opar, SPLUSQuartileIdentifierParameters, "outliers detector"}, {HampelIdentifierParameters, SPLUSQuartileIdentifierParameters}},
 {{tb, TopOutliers, "bottom|top"}, {BottomOutliers, TopOutliers}},
 {{modFunc, Identity, "data modifier function"}, {Identity, Log}}
]

Outliers1

This table gives the values for the countries with the highest GDP.

Block[{data = gdps[[OutlierPosition[gdps[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
 Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
]

HighestGDP1

Population

Similar data retrieval and plots can be made for the countries' populations.

pops = {CountryData[#, "Name"], CountryData[#, "Population"]} & /@CountryData["Countries"];
unit = QuantityUnit[pops[[All, 2]]][[1]];
pops = DeleteCases[pops, {_, _Missing}] /. Quantity[x_, _] :> x;

In the following Pareto plot we can see that 15% of countries have 80% of the total population:

ParetoLawPlot[pops[[All, 2]], PlotLabel -> Row[{"Population", ", ", unit}]]

PopPlot1

Here are the countries with the most people:

Block[{data = pops[[OutlierPosition[pops[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
 Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
]

HighestPop1

Area

We can also see that the Pareto principle holds for the countries' areas:

areas = {CountryData[#, "Name"], CountryData[#, "Area"]} & /@CountryData["Countries"];
areas = DeleteCases[areas, {_, _Missing}] /. Quantity[x_, _] :> x;
ParetoLawPlot[areas[[All, 2]]]

AreaPlot1

Block[{data = areas[[OutlierPosition[areas[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
 Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
]

HighestArea1

Time series-wise

An interesting diagram plots together the curves of GDP changes for different countries. We can see that China and Poland have had rapid growth.

res = Table[
    (t = CountryData[countryName, {{"GDP"}, {1970, 2015}}];
     t = Reverse@Sort[t["Path"][[All, 2]] /. Quantity[x_, _] :> x];
     Tooltip[t, countryName])
    , {countryName, {"USA", "China", "Poland", "Germany", "France", "Denmark"}}];

ParetoLawPlot[res, PlotRange -> All, Joined -> True, PlotLegends -> res[[All, 2]]]

GDPGrowth1

Manipulate

This dynamic interface can be used for a given country to compare (i) the GDP evolution in time and (ii) the corresponding Pareto plot.

Manipulate[
 DynamicModule[{ts, t},
  ts = CountryData[countryName, {{"GDP"}, {1970, 2015}}];
  t = Reverse@Sort[ts["Path"][[All, 2]] /. Quantity[x_, _] :> x];
  Grid[{{"Date list plot of GDP values", "GDP Pareto plot"}, {DateListPlot[ts, ImageSize -> Medium],
     ParetoLawPlot[t, ImageSize -> Medium]}}]
 ], {countryName, {"USA", "China", "Poland", "Germany", "France", "Denmark"}}]

GDPGrowth2

Country flag colors

The following code demonstrates that the colors of the pixels in country flags also adhere to the Pareto principle.

flags = CountryData[#, "Flag"] & /@ CountryData["Countries"];

flags[[1 ;; 12]]

Flags1

ids = ImageData /@ flags;

pixels = Apply[Join, Flatten[ids, 1]];

Clear[ToBinFunc]
(* Map a channel value in (0, 1] to the index of one of ten equal-width bins *)
ToBinFunc[x_] := Evaluate[Piecewise[MapIndexed[{#2[[1]], #1[[1]] < x <= #1[[2]]} &, Partition[Range[0, 1, 0.1], 2, 1]]]];

pixelsInt = Transpose@Table[Map[ToBinFunc, pixels[[All, i]]], {i, 1, 3}];

pixelsIntTally = SortBy[Tally[pixelsInt], -#[[-1]] &];

ParetoLawPlot[pixelsIntTally[[All, 2]]]

FlagsPlot1

TunnelData

Looking at the lengths in the tunnel data we can see the manifestation of an exaggerated Pareto principle.

tunnelLengths = TunnelData[All, {"Name", "Length"}];
tunnelLengths // Length

(* 1552 *)

t = Reverse[Sort[DeleteMissing[tunnelLengths[[All, -1]]] /. Quantity[x_, _] :> x]];

ParetoLawPlot[t]

TunnelsPlot1

Here is the logarithmic histogram of the lengths:

Histogram[Log10@t, PlotRange -> All, PlotTheme -> "Detailed"]

TunnelsHist1

LakeData

The following code gathers the data and makes the Pareto plots of surface areas, volumes, and fish catch values for lakes. We can see that the lake volumes show an exaggerated Pareto principle.

lakeAreas = LakeData[All, "SurfaceArea"];
lakeVolumes = LakeData[All, "Volume"];
lakeFishCatch = LakeData[All, "CommercialFishCatch"];

data = {lakeAreas, lakeVolumes, lakeFishCatch};
t = N@Map[DeleteMissing, data] /. Quantity[x_, _] :> x;

opts = {PlotRange -> All, ImageSize -> Medium}; MapThread[ParetoLawPlot[#1, PlotLabel -> Row[{#2, ", ", #3}], opts] &, {t, {"Lake area", "Lake volume", "Commercial fish catch"}, DeleteMissing[#][[1, 2]] & /@ data}]

LakesPlot1

City data

One of the examples given in [5] is that the city areas obey the Power Law. Since the Pareto principle is a kind of Power Law we can confirm that observation using Pareto principle plots.

The following grid of Pareto principle plots is for areas and population sizes of cities in selected states of USA.

res = Table[
    (cities = CityData[{All, stateName, "USA"}];
     t = Transpose@Outer[CityData, cities, {"Area", "Population"}];
     t = Map[DeleteMissing[#] /. Quantity[x_, _] :> x &, t, {1}];
     ParetoLawPlot[MapThread[Tooltip, {t, {"Area", "Population"}}], PlotLabel -> stateName, ImageSize -> 250])
    , {stateName, {"Alabama", "California", "Florida", "Georgia", "Illinois", "Iowa", "Kentucky", "Ohio", "Tennessee"}}];

Legended[Grid[ArrayReshape[res, {3, 3}]], SwatchLegend[Cases[res[[1]], _RGBColor, Infinity], {"Area", "Population"}]]

CitiesPlot1

Movie ratings in MovieLens datasets

Looking into the MovieLens 20M dataset, [6], we can see that the Pareto principle holds for (1) the most rated movies and (2) the most active users. We can also see the manifestation of an exaggerated Pareto law — 90% of all ratings are for 10% of the movies.

"MovieLens20M-MDensity-and-Pareto-plots"

"MovieLens20M-MDensity-and-Pareto-plots"

4-digit passwords

The following plot, taken from the blog post "PIN analysis", [7], shows that the four-digit passwords people use adhere to the Pareto principle: the first 20% of the (unique) most frequently used passwords correspond to 70% of all password use.

ColorNegate[Import["http://www.datagenetics.com/blog/september32012/c.png"]]

Cumulative-4-Digit-Password-Usages-ColorNegated

References

[1] Anton Antonov, "MathematicaForPrediction utilities", (2014), source code MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction, package MathematicaForPredictionUtilities.m.

[2] Anton Antonov, Pareto principle functions in R, source code MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction, source code file ParetoLawFunctions.R .

[3] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub, URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/OutlierIdentifiers.m .

[4] Wikipedia entry, "Pareto principle", URL: https://en.wikipedia.org/wiki/Pareto_principle .

[5] Wikipedia entry, "Power law", URL: https://en.wikipedia.org/wiki/Power_law .

[6] GroupLens Research, MovieLens 20M Dataset, (2015).

[7] "PIN analysis", (2012), DataGenetics.