A monad for Quantile Regression workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for the specification and execution of Quantile Regression workflows. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Quantile Regression? Assume we have a set of two-dimensional points, each point being a pair of an independent variable value and a dependent variable value. We want to find a curve that is a function of the independent variable and that splits the points in such a way that, say, 30% of the points are above that curve. This is done with Quantile Regression, see [Wk2, CN1, AA2, AA3]. Quantile Regression is a method to estimate the variable relations for all parts of the distribution. (Not just, say, the mean of the relationships found with Least Squares Regression.)

The goal of the monad design is to make the specification of Quantile Regression workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario. Since Quantile Regression is often compared with Least Squares Regression and some type of filtering (like Moving Average), those functionalities should also be included in the monad design scenarios.

The monad is named QRMon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1], and the Quantile Regression package "QuantileRegression.m", [AAp4, AA2].

The data for this document is read from WL’s repository or created ad-hoc.

The monadic programming design is used as a Software Design Pattern. The QRMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of Quantile Regression workflows.

Here is an example of using the QRMon monad over heteroscedastic data:

QRMon-introduction-monad-pipeline-example-table

QRMon-introduction-monad-pipeline-example-echo

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As mentioned above, the monad QRMon can be seen as a DSL. Because of this, the monad pipelines made with QRMon are sometimes called "specifications".

Remark: With "regression quantile" we mean "a curve or function that is computed with Quantile Regression".

Contents description

The document has the following structure.

  • The sections "Package load" and "Data load" obtain the needed code and data.
  • The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

  • The sections "QRMon overview" and "Monad elements" provide technical description of the QRMon monad needed to utilize it.

    • (Using a fair amount of examples.)
  • The section "Unit tests" describes the tests used in the development of the QRMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section "Future plans" outlines future directions of development.
  • The section "Implementation notes" just says that QRMon’s development process and this document follow the ones of the classifications workflows monad ClCon, [AA6].

Remark: One can read only the sections "Introduction", "Design considerations", "Monad design", and "QRMon overview". That set of sections provides a fairly good, programming-language-agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp6]:

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load the data that is used in the rest of the document. The time series data is obtained with WL’s built-in curated data functions (WeatherData and FinancialData).

The data summarization and plots are done through QRMon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp6].

Distribution data

The following data is generated to have heteroscedasticity (https://en.wikipedia.org/wiki/Heteroscedasticity).

distData = 
  Table[{x, 
    Exp[-x^2] + 
     RandomVariate[
      NormalDistribution[0, .15 Sqrt[Abs[1.5 - x]/1.5]]]}, {x, -3, 
    3, .01}];
Length[distData]

(* 601 *)

QRMonUnit[distData]⟹QRMonEchoDataSummary⟹QRMonPlot;
QRMon-distData

Temperature time series

tsData = WeatherData[{"Orlando", "USA"}, "Temperature", {{2015, 1, 1}, {2018, 1, 1}, "Day"}]

QRMonUnit[tsData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-tsData

Financial time series

The following data is typical for financial time series. (Note the differences with the temperature time series.)

finData = TimeSeries[FinancialData["NYSE:GE", {{2014, 1, 1}, {2018, 1, 1}, "Day"}]];

QRMonUnit[finData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-finData

Design considerations

The steps of the main regression workflow addressed in this document follow.

  1. Retrieving data from a data repository.

  2. Optionally, transform the data.

    1. Delete rows with missing fields.

    2. Rescale data along one or both of the axes.

    3. Apply a moving average (or a moving median, or a moving map).

  3. Verify assumptions of the data.

  4. Run a regression algorithm with a certain basis of functions using:

    1. Quantile Regression, or

    2. Least Squares Regression.

  5. Visualize the data and regression functions.

  6. If the fit of the regression functions is not satisfactory, go to step 4.

  7. Utilize the found regression functions to compute:

    1. outliers,

    2. local extrema,

    3. approximation or fitting errors,

    4. conditional density distributions,

    5. time series simulations.

The following flow-chart corresponds to the list of steps above.

Quantile-regression-workflow-extended

In order to address:

  • the introduction of new elements in regression workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for regression workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a regression workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […] Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

Monad design

The monad we consider is designed to speed up the programming of the quantile regression workflows outlined in the previous section. The monad is named QRMon for "Quantile Regression Monad".

We want to be able to construct monad pipelines of the general form:

QRMon-formula-1
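
In plain text, form (1) can be sketched as

QRMonUnit[data]⟹f1⟹f2⟹…⟹fn

where f1, f2, …, fn are QRMon functions or user-written pipeline functions. (A textual sketch only; the figure above shows the exact formula.)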

QRMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

QRMon-formula-2
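
Roughly, in form (2) every pipeline function takes and returns an expression of the shape QRMon[value, context], where context is an association of named objects. (Again, only a textual sketch of the figure above.)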

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of QRMon we store different objects in the context for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or

  2. The object is (relatively) hard to compute.

Such objects are transformed data, regression functions, and outliers.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial quantile regression workflows.

  • The monad works with time series, numerical matrices, and numerical vectors.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • The monad can do quantile regression with B-spline bases, quantile regression fit, and least squares fit with specified bases of functions.

  • The monad allows cursory examination and summarization of the data.

  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • It is easy to plot different combinations of data, regression functions, outliers, approximation errors, etc.

The QRMon components and their interactions are fairly simple.

The main QRMon operations implicitly put in the context or utilize from the context the following objects:

  • (time series) data,

  • regression functions,

  • outliers and outlier regression functions.

Note that the set of monadic pipeline value types of QRMon is fairly heterogeneous and that certain awareness of "the current pipeline value" is assumed when composing QRMon pipelines.

Obviously, we can put any object in the context through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

QRMon overview

When using a monad we lift certain data into the "monad space", using monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the QRMon monad is done with the function QRMonUnit. Results from the monad can be obtained with the functions QRMonTakeValue, QRMonTakeContext, or with the other QRMon functions with the prefix "QRMonTake" (see below.)

Here is a corresponding diagram of a generic computation with the QRMon monad:

QRMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete QRMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is shown together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The QRMon functions are separated into four groups:

  • operations,

  • setters and droppers,

  • takers,

  • State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples of the usage of the QRMon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the QRMon monad functions with the pipeline value and context.

QRMon-monad-functions-overview-table

The following table shows the functions that are function synonyms or short-cuts.

QRMon-monad-functions-shortcuts-table

State monad functions

Here are the QRMon State Monad functions (generated using the prefix "QRMon", [AAp1, AA1]):

QRMon-StMon-functions-overview-table

Monad elements

In this section we show that QRMon has all of the properties listed in the previous section.

The monad head

The monad head is QRMon. Anything wrapped in QRMon can serve as monad’s pipeline value. It is better though to use the constructor QRMonUnit. (Which adheres to the definition in [Wk1].)

QRMon[{{1, 223}, {2, 323}}, <||>]⟹QRMonEchoDataSummary;
The-monad-head-output

Lifting data to the monad

The function lifting the data into the monad QRMon is QRMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

QRMonUnit[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

QRMonUnit[]⟹QRMonSetData[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

(See the sub-section "Setters, droppers, and takers" for more details of setting and taking values in QRMon contexts.)

Currently the monad can deal with data in the following forms:

  • time series,

  • numerical vectors,

  • numerical matrices of rank two.

When the data lifted to the monad is a numerical vector vec, it is assumed that vec has to become the second column of a "time series" matrix; the first column is derived with Range[Length[vec]].
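
Here is a small sketch of that convention; lifting the vector should be equivalent, up to the implicit index column, to lifting the corresponding two-column matrix:

vec = RandomReal[{0, 1}, 5];
mat = Transpose[{Range[Length[vec]], vec}]; (* the matrix form described above *)
QRMonUnit[vec]⟹QRMonEchoDataSummary;
QRMonUnit[mat]⟹QRMonEchoDataSummary;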

Generally, WL makes it easy to extract columns from datasets in order to obtain numerical matrices, so datasets are not currently supported in QRMon.
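
As a (hypothetical) illustration of that column extraction, here is one way to convert a small two-column Dataset into the matrix form expected by QRMon:

ds = Dataset[{<|"x" -> 1, "y" -> 2.3|>, <|"x" -> 2, "y" -> 2.9|>, <|"x" -> 3, "y" -> 3.4|>}];
QRMonUnit[Normal[ds[All, Values]]]⟹QRMonEchoDataSummary;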

Quantile regression with B-splines

This computes quantile regression with B-spline basis over 12 regularly spaced knots. (Using Linear Programming algorithms; see [AA2] for details.)

QRMonUnit[distData]⟹
  QRMonQuantileRegression[12]⟹
  QRMonPlot;
Quantile-regression-with-B-splines-output-1

The monad function QRMonQuantileRegression has the same options as QuantileRegression. (The default value for option Method is different, since using "CLP" is generally faster.)

Options[QRMonQuantileRegression]

(* {InterpolationOrder -> 3, Method -> {LinearProgramming, Method -> "CLP"}} *)

Let us compute regression using a list of particular knots, specified quantiles, and the method "InteriorPoint" (instead of the Linear Programming library CLP):

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[{-3, -2, -1, 0, 1, 1.5, 2.5, 3}, Range[0.1, 0.9, 0.2], Method -> {LinearProgramming, Method -> "InteriorPoint"}]⟹
   QRMonPlot;
Quantile-regression-with-B-splines-output-2

Remark: As it was mentioned above the function QRMonRegression is a synonym of QRMonQuantileRegression.

The fit functions can be extracted from the monad with QRMonTakeRegressionFunctions, which gives an association of quantiles and pure functions.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-with-B-splines-output-3

Quantile regression fit and Least squares fit

Instead of using a B-spline basis of functions we can compute a fit with our own basis of functions.

Here is a basis of functions:

bFuncs = Table[PDF[NormalDistribution[m, 1], x], {m, Min[distData[[All, 1]]], Max[distData[[All, 1]]], 1}];
Plot[bFuncs, {x, Min[distData[[All, 1]]], Max[distData[[All, 1]]]}, 
 PlotRange -> All, PlotTheme -> "Scientific"]
Quantile-regression-fit-and-Least-squares-fit-basis

Here we do a Quantile Regression fit, a Least Squares fit, and plot the results:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bFuncs]⟹
   QRMonLeastSquaresFit[bFuncs]⟹
   QRMonPlot;
   
Quantile-regression-fit-and-Least-squares-fit-output-1

Remark: The functions "QRMon*Fit" should generally have a second argument for the symbol of the basis functions independent variable. Often that symbol can be omitted and implied. (Which can be seen in the pipeline above.)

Remark: As it was mentioned above the function QRMonRegressionFit is a synonym of QRMonQuantileRegressionFit and QRMonFit is a synonym of QRMonLeastSquaresFit.

As it was pointed out in the previous sub-section, the fit functions can be extracted from the monad with QRMonTakeRegressionFunctions. Here the keys of the returned/taken association consist of quantiles and "mean" since we applied both Quantile Regression and Least Squares Regression.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-fit-and-Least-squares-fit-output-2

Default basis to fit (using Chebyshev polynomials)

One of the main advantages of using the function QuantileRegression of the package [AAp4] is that the functions used to do the regression are specified with a few numeric parameters. (Most often only the number of knots is sufficient.) This is achieved by using a basis of B-spline functions of a certain interpolation order.

To get similar behaviour for Quantile Regression fitting we need to select a certain well-known basis with desirable properties. Such a basis is given by the Chebyshev polynomials of the first kind [Wk3]. Chebyshev polynomial bases can be easily generated in Mathematica with the functions ChebyshevT or ChebyshevU.

Here is an application of fitting with a basis of 12 Chebyshev polynomials of the first kind:

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[12]⟹
  QRMonLeastSquaresFit[12]⟹
  QRMonPlot;
Default-basis-to-fit-output-1-and-2

The code above is equivalent to the following code:

bfuncs = Table[ChebyshevT[i, Rescale[x, MinMax[distData[[All, 1]]], {-0.95, 0.95}]], {i, 0, 12}];

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bfuncs]⟹
   QRMonLeastSquaresFit[bfuncs]⟹
   QRMonPlot;
Default-basis-to-fit-output-1-and-2

The shrinking of the ChebyshevT domain seen in the definitions of bfuncs is done in order to prevent approximation error effects at the ends of the data domain. The following code uses the ChebyshevT domain {-1, 1} instead of the domain {-0.95, 0.95} used above.

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[{4, {-1, 1}}]⟹
  QRMonPlot;
Default-basis-to-fit-output-3

Regression functions evaluation

The computed quantile and least squares regression functions can be evaluated with the monad function QRMonEvaluate.

Evaluation for a given value of the independent variable:

p⟹QRMonEvaluate[0.12]⟹QRMonTakeValue

(* <|0.25 -> 0.930402, 0.5 -> 1.01411, 0.75 -> 1.08075, "mean" -> 0.996963|> *)

Evaluation for a vector of values:

p⟹QRMonEvaluate[Range[-1, 1, 0.5]]⟹QRMonTakeValue

(* <|0.25 -> {0.258241, 0.677461, 0.943299, 0.703812, 0.293741}, 
     0.5 -> {0.350025, 0.768617, 1.02311, 0.807879, 0.374545}, 
     0.75 -> {0.499338, 0.912183, 1.10325, 0.856729, 0.431227}, 
     "mean" -> {0.355353, 0.776006, 1.01118, 0.783304, 0.363172}|> *)

Evaluation for complicated lists of numbers:

p⟹QRMonEvaluate[{0, 1, {1.5, 1.4}}]⟹QRMonTakeValue

(* <|0.25 -> {0.943299, 0.293741, {0.0762883, 0.10759}}, 
     0.5 -> {1.02311, 0.374545, {0.103386, 0.139142}}, 
     0.75 -> {1.10325, 0.431227, {0.133755, 0.177161}}, 
     "mean" -> {1.01118, 0.363172, {0.107989, 0.142021}}|> *)
   

The obtained values can be used to compute estimates of the distributions of the dependent variable. See the sub-sections "Estimating conditional distributions" and "Dependent variable simulation".
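
For example, here is a rough sketch (not the mechanism of QRMonConditionalCDF used further below) that turns the quantile-to-value association at a single point into approximate CDF points:

qVals = KeyDrop[p⟹QRMonEvaluate[0.12]⟹QRMonTakeValue, "mean"];
cdfPoints = Transpose[{Values[qVals], Keys[qVals]}]; (* {value, estimated CDF} pairs *)
ListLinePlot[cdfPoints, AxesLabel -> {"y", "CDF"}]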

Errors and error plots

Here with "errors" we mean the differences between data’s dependent variable values and the corresponding values calculated with the fitted regression curves.

In the pipeline below we compute a couple of regression quantiles, plot them together with the data, plot the errors, compute the errors, and summarize them.

QRMonUnit[finData]⟹
  QRMonQuantileRegression[10, {0.5, 0.1}]⟹
  QRMonDateListPlot[Joined -> False]⟹
  QRMonErrorPlots["DateListPlot" -> True, Joined -> False]⟹
  QRMonErrors⟹
  QRMonEchoFunctionValue["Errors summary:", RecordsSummary[#[[All, 2]]] & /@ # &];
Errors-and-error-plots-output-1

Each of the functions QRMonErrors and QRMonErrorPlots computes the errors. (That computation is considered cheap.)

Finding outliers

Finding outliers can be done with the function QRMonOutliers. The outliers found by QRMonOutliers are simply points that are below or above certain regression quantile curves, for example, the ones corresponding to 0.02 and 0.98.

Here is an example:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[6, {0.02, 0.98}]⟹
   QRMonOutliers⟹
   QRMonEchoValue⟹
   QRMonOutliersPlot;
Finding-outliers-output-1

The function QRMonOutliers puts in the context values for the keys "outliers" and "outlierRegressionFunctions". The former is for the found outliers, the latter is for the functions corresponding to the used regression quantiles.

Keys[p⟹QRMonTakeContext]

(* {"data", "regressionFunctions", "outliers", "outlierRegressionFunctions"} *)

Here are the corresponding quantiles of the plot above:

Keys[p⟹QRMonTakeOutlierRegressionFunctions]

(* {0.02, 0.98} *)
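
As a hedged cross-check of the outlier definition (the exact comparison the package uses at boundary points may differ), the top outliers can be recomputed directly from the objects stored in the context:

ctx = p⟹QRMonTakeContext;
qTop = ctx["outlierRegressionFunctions"][0.98]; (* the 0.98 regression quantile *)
manualTopOutliers = Select[ctx["data"], #[[2]] > qTop[#[[1]]] &];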

The control of the outliers computation is done through the arguments and options of QRMonQuantileRegression (or the rest of the regression calculation functions).

If only one regression quantile is found in the context and the corresponding quantile is less than 0.5 then QRMonOutliers finds only bottom outliers. If only one regression quantile is found in the context and the corresponding quantile is greater than 0.5 then QRMonOutliers finds only top outliers.

Here is an example for finding only the top outliers:

QRMonUnit[finData]⟹
  QRMonQuantileRegression[5, 0.8]⟹
  QRMonOutliers⟹
  QRMonEchoFunctionContext["outlier quantiles:", Keys[#outlierRegressionFunctions] &]⟹
  QRMonOutliersPlot["DateListPlot" -> True];
  
Finding-outliers-output-2

Plotting outliers

The function QRMonOutliersPlot makes an outliers plot. If the outliers are not in the context then QRMonOutliersPlot calls QRMonOutliers first.

Here are the options of QRMonOutliersPlot:

Options[QRMonOutliersPlot]

(* {"Echo" -> True, "DateListPlot" -> False, ListPlot -> {Joined -> False}, Plot -> {}} *)

The default behavior is to echo the plot. That can be suppressed with the option "Echo".

QRMonOutliersPlot combines two plots with Show:

  • one with ListPlot (or DateListPlot) for the data and the outliers,

  • the other with Plot for the regression quantiles used to find the outliers.

That is why separate lists of options can be given to manipulate those two plots. The option "DateListPlot" can be used to make plots with date or time axes.

QRMonUnit[tsData]⟹
 QRMonQuantileRegression[12, {0.01, 0.99}]⟹
 QRMonOutliersPlot[
  "Echo" -> False,
  "DateListPlot" -> True,
  ListPlot -> {PlotStyle -> {Green, {PointSize[0.02], 
       Red}, {PointSize[0.02], Blue}}, Joined -> False, 
    PlotTheme -> "Grid"},
  Plot -> {PlotStyle -> Orange}]⟹
 QRMonTakeValue
 
Plotting-outliers-output-2

Estimating conditional distributions

Consider the following problem:

How to estimate the conditional density of the dependent variable given a value of the conditioning independent variable?

(In other words, find the distribution of the y-values for a given, fixed x-value.)

The solution of this problem using Quantile Regression is discussed in detail in [PG1] and [AA4].

Finding a solution for this problem can be seen as a primary motivation to develop Quantile Regression algorithms.

The following pipeline (i) computes and plots a set of five regression quantiles and (ii) then, using the found regression quantiles, computes and plots the conditional distributions for two focus points (-2 and 1).

QRMonUnit[distData]⟹
  QRMonQuantileRegression[6, 
   Range[0.1, 0.9, 0.2]]⟹
  QRMonPlot[GridLines -> {{-2, 1}, None}]⟹
  QRMonConditionalCDF[{-2, 1}]⟹
  QRMonConditionalCDFPlot;
Estimating-conditional-distributions-output-1

Moving average, moving median, and moving map

Fairly often it is a good idea for a given time series to apply filter functions like Moving Average or Moving Median. We might want to:

  • visualize the obtained transformed data,

  • do regression over the transformed data,

  • compare with regression curves over the original data.

For these reasons QRMon has the functions QRMonMovingAverage, QRMonMovingMedian, and QRMonMovingMap that correspond to the built-in functions MovingAverage, MovingMedian, and MovingMap.

Here is an example:

QRMonUnit[tsData]⟹
  QRMonDateListPlot[ImageSize -> Small]⟹
  QRMonMovingAverage[20]⟹
  QRMonEchoFunctionValue["Moving avg: ", DateListPlot[#, ImageSize -> Small] &]⟹
  QRMonMovingMap[Mean, Quantity[20, "Days"]]⟹
  QRMonEchoFunctionValue["Moving map: ", DateListPlot[#, ImageSize -> Small] &];
Moving-average-moving-median-and-moving-map-output-1

Dependent variable simulation

Consider the problem of making a time series that is a simulation of a process represented by a known time series.

More formally,

  • we are given a time-axis grid (regular or irregular),

  • we consider each grid node to correspond to a random variable,

  • we want to generate time series based on the empirical CDF’s of the random variables that correspond to the grid nodes.

The formulation of the problem hints at an (almost) straightforward implementation using Quantile Regression.

p = QRMonUnit[tsData]⟹QRMonQuantileRegression[30, Join[{0.01}, Range[0.1, 0.9, 0.1], {0.99}]];

tsNew =
  p⟹
   QRMonSimulate[1000]⟹
   QRMonTakeValue;

opts = {ImageSize -> Medium, PlotTheme -> "Detailed"};
GraphicsGrid[{{DateListPlot[tsData, PlotLabel -> "Actual", opts],
    DateListPlot[tsNew, PlotLabel -> "Simulated", opts]}}]
Dependent-variable-simulation-output-1
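
Here is a hedged illustration of the idea behind QRMonSimulate (not the package implementation): at one node of the time-axis grid the regression quantiles give quantile-value pairs, from which an approximate inverse CDF can be interpolated and sampled.

qFuncs = p⟹QRMonTakeRegressionFunctions;
x0 = tsData["Times"][[1]]; (* one (numeric) node on the time axis *)
invCDF = Interpolation[{#, qFuncs[#][x0]} & /@ Keys[qFuncs], InterpolationOrder -> 1];
simulatedValuesAtX0 = invCDF /@ RandomReal[{0.01, 0.99}, 5]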

Finding local extrema in noisy data

Using regression fitting — and Quantile Regression in particular — we can easily construct semi-symbolic algorithms for finding local extrema in noisy time series data; see [AA5]. The QRMon function with such an algorithm is QRMonLocalExtrema.

In brief, the algorithm steps are as follows. (For more details see [AA5].)

  1. Fit a polynomial through the data.

  2. Find the local extrema of the fitted polynomial. (We will call them fit estimated extrema.)

  3. Around each of the fit estimated extrema find the most extreme point in the data by a nearest neighbors search (by using Nearest).
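
Here is a hedged sketch of step 3 in plain WL (using distData for concreteness; the monad function QRMonLocalExtrema below is the actual mechanism):

nf = Nearest[distData[[All, 1]] -> "Index"];
refineMaximum[xStar_, k_: 20] := MaximalBy[distData[[nf[xStar, k]]], Last]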

The function QRMonLocalExtrema uses the regression quantiles previously found in the monad pipeline (and stored in the context.) The bottom regression quantile is used for finding local minima, the top regression quantile is used for finding the local maxima.

An example of finding local extrema follows.

QRMonUnit[TimeSeriesWindow[tsData, {{2015, 1, 1}, {2018, 12, 31}}]]⟹
  QRMonQuantileRegression[10, {0.05, 0.95}]⟹
  QRMonDateListPlot[Joined -> False, PlotTheme -> "Scientific"]⟹
  QRMonLocalExtrema["NumberOfProximityPoints" -> 100]⟹
  QRMonEchoValue⟹
  QRMonAddToContext⟹
  QRMonEchoFunctionContext[
   DateListPlot[{#localMinima, #localMaxima, #data}, 
     PlotStyle -> {PointSize[0.015], PointSize[0.015], Gray}, 
     Joined -> False, 
     PlotLegends -> {"localMinima", "localMaxima", "data"}, 
     PlotTheme -> "Scientific"] &];
Finding-local-extrema-in-noisy-data-output-1

Note that in the pipeline above in order to plot the data and local extrema together some additional steps are needed. The result of QRMonLocalExtrema becomes the pipeline value; that pipeline value is displayed with QRMonEchoValue, and stored in the context with QRMonAddToContext. If the pipeline value is an association — which is the case here — the monad function QRMonAddToContext joins that association with the context association. In this case this means that we will have key-value elements in the context for "localMinima" and "localMaxima". The date list plot at the end of the pipeline uses values of those context keys (together with the value for "data".)
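
Here is a minimal, hedged illustration of that context-joining behaviour:

QRMon[<|"a" -> 1|>, <||>]⟹QRMonAddToContext⟹QRMonTakeContext

(* expected: a context association that contains the key "a" *)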

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding "setter", "dropper", and "taker" functions as summarized in a previous section.

For example:

p = QRMonUnit[distData]⟹QRMonQuantileRegressionFit[2];

p⟹QRMonTakeRegressionFunctions

(* <|0.25 -> (0.0191185 + 0.00669159 #1 + 3.05509*10^-14 #1^2 &), 
     0.5 -> (0.191408 + 9.4728*10^-14 #1 + 3.02272*10^-14 #1^2 &), 
     0.75 -> (0.563422 + 3.8079*10^-11 #1 + 7.63637*10^-14 #1^2 &)|> *)
     

If other values are put in the context they can be obtained through the (generic) function QRMonTakeContext, [AAp1]:

p = QRMonUnit[RandomReal[1, {2, 2}]]⟹QRMonAddToContext["data"];

(p⟹QRMonTakeContext)["data"]

(* {{0.608789, 0.741599}, {0.877074, 0.861554}} *)

Another generic function from [AAp1] is QRMonTakeValue (used many times above.)

Here is an example of the "data dropper" QRMonDropData:

p⟹QRMonDropData⟹QRMonTakeContext

(* <||> *)

(The "droppers" simply use the state monad function QRMonDropFromContext, [AAp1]. For example, QRMonDropData is equivalent to QRMonDropFromContext["data"].)

Unit tests

The development of QRMon was done with two types of unit tests: (i) directly specified tests, [AAp7], and (ii) tests based on randomly generated pipelines, [AAp8].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicQuantileRegression-Unit-Tests.wlt", [AAp7]:

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt"]
]
Unit-tests-output-1

The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "GenerateData", "QuantileRegression-1", \
"QuantileRegression-2", "QuantileRegression-3", \
"QuantileRegression-and-Fit-1", "Fit-and-QuantileRegression-1", \
"QuantileRegressionFit-and-Fit-1", "Fit-and-QuantileRegressionFit-1", \
"Outliers-1", "Outliers-2", "GridSequence-1", "BandsSequence-1", \
"ConditionalCDF-1", "Evaluate-1", "Evaluate-2", "Evaluate-3", \
"Simulate-1", "Simulate-2", "Simulate-3"} *)

Random pipelines tests

Since the monad QRMon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the QRMon DSL the sentences are QRMon pipelines. The package "MonadicQuantileRegressionRandomPipelinesUnitTests.m", [AAp8], has functions for generation of QRMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeQRMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

(* 
Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]}, 
 Clear[DoubleLongRightArrow];
 pipelines = pipelines /. {_TemporalData -> "tsData", _?MatrixQ -> "distData"};
 GridTableForm[Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], TableHeadings -> {"pipeline"}]
 ]
AutoCollapse[] *)
Unit-tests-random-pipelines-sample

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunQRMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages; all of the rest passed.

rpTRObj = TestReport[res]

(The message failures, of course, have to be examined — some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Workflow operations

A list of possible, additional workflow operations and improvements follows.

  • Certain improvements can be done over the specification of the different plot options.

  • It will be useful to develop a function for automatic finding of over-fitting parameters.

  • The time series simulation should be done by aggregation of similar time intervals.

    • For example, for a time series that spans several years, a Quantile Regression simulation is made for each month, and the results are spliced together to obtain a one-year simulation.
  • If the time series is represented as a sequence of categorical values, then the time series simulation can use Bayesian probabilities derived from sub-sequences.
    • QRMon already has functions that facilitate that, QRMonGridSequence and QRMonBandsSequence.

Conversational agent

Using the packages [AAp10, AAp11] we can generate QRMon pipelines with natural commands. The plan is to develop and document those functionalities further.

Here is an example of a pipeline constructed with natural language commands:

QRMonUnit[distData]⟹
  ToQRMonPipelineFunction["show data summary"]⟹
  ToQRMonPipelineFunction["calculate quantile regression for quantiles 0.2, 0.8 and with 40 knots"]⟹
  ToQRMonPipelineFunction["plot"];
Future-plans-conversational-agent-output-1

Implementation notes

The implementation methodology of the QRMon monad packages [AAp3, AAp8] followed the methodology created for the ClCon monad package [AAp9, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document "A monad for classification workflows", [AA6].

A lot of the functionalities and signatures of QRMon were designed and programmed through considerations of natural language command specifications given to a specialized conversational agent. (As discussed in the previous section.)

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicQuantileRegression.m.

[AAp4] Anton Antonov, Quantile regression Mathematica package, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/QuantileRegression.m .

[AAp5] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp7] Anton Antonov, Monadic Quantile Regression unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt .

[AAp8] Anton Antonov, Monadic Quantile Regression random pipelines Mathematica unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegressionRandomPipelinesUnitTests.m .

[AAp9] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

ConversationalAgents packages

[AAp10] Anton Antonov, Time series workflows grammar in EBNF, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

[AAp11] Anton Antonov, QRMon translator Mathematica package, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

MathematicaForPrediction articles

[AA1] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, "Quantile regression through linear programming", (2013), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2013/12/16/quantile-regression-through-linear-programming/ .

[AA3] Anton Antonov, "Quantile regression with B-splines", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/01/quantile-regression-with-b-splines/ .

[AA4] Anton Antonov, "Estimation of conditional density distributions", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/13/estimation-of-conditional-density-distributions/ .

[AA5] Anton Antonov, "Finding local extrema in noisy data using Quantile Regression", (2015), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2015/09/27/finding-local-extrema-in-noisy-data-using-quantile-regression/ .

[AA6] Anton Antonov, "A monad for classification workflows", (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows/ .

Other

[Wk1] Wikipedia entry, Monad, URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry, Quantile Regression, URL: https://en.wikipedia.org/wiki/Quantile_regression .

[Wk3] Wikipedia entry, Chebyshev polynomials, URL: https://en.wikipedia.org/wiki/Chebyshev_polynomials .

[CN1] Brian S. Cade and Barry R. Noon, "A gentle introduction to quantile regression for ecologists", (2003), Frontiers in Ecology and the Environment, 1 (8): 412-420. doi:10.2307/3868138. URL: http://www.econ.uiuc.edu/~roger/research/rq/QReco.pdf .

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub. URL: https://github.com/halirutan/Mathematica-IntelliJ-Plugin .

[RG1] Roger Koenker, Quantile Regression, Cambridge University Press, 2005.

Phone dialing conversational agent

Introduction

This blog post proclaims the first committed project in the repository ConversationalAgents at GitHub. The project has designs and implementations of a phone calling conversational agent that aims at providing the following functionalities:

  • contacts retrieval (querying, filtering, selection),
  • contacts prioritization, and
  • phone call (work flow) handling.
  • The design is based on a Finite State Machine (FSM) and context free grammar(s) for commands that switch between the states of the FSM. The grammar is designed as a context free grammar rules of a Domain Specific Language (DSL) in Extended Backus-Naur Form (EBNF). (For more details on DSLs design and programming see [1].)

    The (current) implementation is with Wolfram Language (WL) / Mathematica using the functional parsers package [2, 3].

    This movie gives an overview from an end user perspective.

    General design

    The design of the Phone Conversational Agent (PhCA) is derived in a straightforward manner from the typical work flow of calling a contact (using, say, a mobile phone.)

    The main goals for the conversational agent are the following:

    1. contacts retrieval — search, filtering, selection — using both natural language commands and manual interaction,
    2. intuitive integration with the usual work flow of phone calling.

    An additional goal is to facilitate contacts retrieval by determining the most appropriate contacts in query responses. For example, while driving to work by pressing the dial button we might prefer the contacts of an up-coming meeting to be placed on top of the prompting contacts list.

    In this project we assume that the voice to text conversion is done with an external (reliable) component.

    It is assumed that an user of PhCA can react to both visual and spoken query results.

    The main algorithm is the following.

    1) Parse and interpret a natural language command.

    2) If the command is a contacts query that returns a single contact then call that contact.

    3) If the command is a contacts query that returns multiple contacts then :

    3.1) use natural language commands to refine and filter the query results,

    3.2) until a single contact is obtained. Call that single contact.

    4) If other type of command is given act accordingly.

    PhCA has commands for system usage help and for canceling the current contact search and starting over.

    The following FSM diagram gives the basic structure of PhCA:

    "Phone-conversational-agent-FSM-and-DB"

    This movie demonstrates how different natural language commands switch the FSM states.

    Grammar design

    The derived grammar describes sentences that: 1. fit end user expectations, and 2. are used to switch between the FSM states.

    Because of the simplicity of the FSM and the natural language commands only few iterations were done with the Parser-generation-by-grammars work flow.

    The base grammar is given in the file "./Mathematica/PhoneCallingDialogsGrammarRules.m" in EBNF used by [2].

    Here are parsing results of a set of test natural language commands:

    "PhCA-base-grammar-test-queries-125"

    using the WL command:

    ParsingTestTable[ParseJust[pCALLCONTACT\[CirclePlus]pCALLFILTER], ToLowerCase /@ queries]
     

    (Note that according to PhCA’s FSM diagram the parsing of pCALLCONTACT is separated from pCALLFILTER, hence the need to combine the two parsers in the code line above.)

    PhCA’s FSM implementation provides interpretation and context of the functional programming expressions obtained by the parser.

    In the running script "./Mathematica/PhoneDialingAgentRunScript.m" the grammar parsers are modified to do successful parsing using data elements of the provided fake address book.

    The base grammar can be extended with the "Time specifications grammar" in order to include queries based on temporal commands.

    Running

    In order to experiment with the agent just run in Mathematica the command:

    Import["https://raw.githubusercontent.com/antononcube/ConversationalAgents/master/Projects/PhoneDialingDialogsAgent/Mathematica/PhoneDialingAgentRunScript.m"]

    The imported Wolfram Language file, "./Mathematica/PhoneDialingAgentRunScript.m", uses a fake address book based on movie creators metadata. The code structure of "./Mathematica/PhoneDialingAgentRunScript.m" allows easy experimentation and modification of the running steps.

    Here are several screen-shots illustrating a particular usage path (scan left-to-right):

    "PhCA-1-call-someone-from-x-men"" "PhCA-2-a-producer" "PhCA-3-the-third-one

    See this movie demonstrating a PhCA run.

    References

    [1] Anton Antonov, "Creating and programming domain specific languages", (2016), MathematicaForPrediction at WordPress blog.

    [2] Anton Antonov, Functional parsers, Mathematica package, MathematicaForPrediction at GitHub, 2014.

    [3] Anton Antonov, "Natural language processing with functional parsers", (2014), MathematicaForPrediction at WordPress blog.

    Monad code generation and extension

    … in Mathematica / Wolfram Language

    Anton Antonov

    MathematicaForPrediction at GitHub

    MathematicaVsR at GitHub

    June 2017

    Introduction

    This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.

    The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:

    1. easy to construct, read, and modify sequences of commands (pipelines),
    2. easy to program polymorphic behaviour,
    3. easy to program context utilization.

    Speaking informally,

    • Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.

    The theoretical background provided in this document is given in the Wikipedia article on Monadic programming, [Wk1], and the article “The essence of functional programming” by Philip Wadler, [H3]. The code in this document is based on the primary monad definition given in [Wk1,H3]. (Based on the “Kleisli triple” and used in Haskell.)

    The general monad structure can be seen as:

    1. a software design pattern;
    2. a fundamental programming construct (similar to class in object-oriented programming);
    3. an interface for software types to have implementations of.

    In this document we treat the monad structure as a design pattern, [Wk3]. (After reading [H3] point 2 becomes more obvious. A similar in spirit, minimalistic approach to Object-oriented Design Patterns is given in [AA1].)

    We do not deal with types for monads explicitly, we generate code for monads instead. One reason for this is the “monad design pattern” perspective; another one is that in Mathematica / WL the notion of algebraic data type is not needed — pattern matching comes from the core “book of replacement rules” principle.

    The rest of the document is organized as follows.

    1. Fundamental sections The section “What is a monad?” gives the necessary definitions. The section “The basic Maybe monad” shows how to program a monad from scratch in Mathematica / WL. The section “Extensions with polymorphic behavior” shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming, the rest of the document can be skipped.)

    2. Monadic programming in practice The section “Monad code generation” describes packages for generating monad code. The section “Flow control in monads” describes additional, control flow functionalities. The section “General work-flow of monad code generation utilization” gives a general perspective on the use of monad code generation. The section “Software design with monadic programming” discusses (small scale) software design with monadic programming.

    3. Case study sections The case study sections “Contextual monad classification” and “Tracing monad pipelines” hopefully have interesting and engaging examples of monad code generation, extension, and utilization.

    What is a monad?

    The monad definition

    In this document a monad is any set of a symbol m and two operators unit and bind that adhere to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [H3].)

    Here are operators for a monad associated with a certain symbol M:

    1. monad unit function (“return” in Haskell notation) is Unit[x_] := M[x];
    2. monad bind function (“>>=” in Haskell notation) is a rule like Bind[M[x_], f_] := f[x] with MatchQ[f[x],M[_]] giving True.

    Note that:

    • the function Bind unwraps the content of M[_] and gives it to the function f;
    • the functions fi are responsible to return results wrapped with the monad symbol M.

    Here is an illustration formula showing a monad pipeline:

    Monad-formula-generic

    Monad-formula-generic

    From the definition and formula it should be clear that if for the result of Bind[_M,f[x]] the test MatchQ[f[x],_M] is True then the result is ready to be fed to the next binding operation in monad’s pipeline. Also, it is clear that it is easy to program the pipeline functionality with Fold:

    Fold[Bind, M[x], {f1, f2, f3}]
    
    (* Bind[Bind[Bind[M[x], f1], f2], f3] *)

    The monad laws

    The monad laws definitions are taken from [H1] and [H3].In the monad laws given below the symbol “⟹” is for monad’s binding operation and ↦ is for a function in anonymous form.

    Here is a table with the laws:

    Remark: The monad laws are satisfied for every symbol in Mathematica / WL with List being the unit operation and Apply being the binding operation.

    Expected monadic programming features

    Looking at formula (1) — and having certain programming experiences — we can expect the following features when using monadic programming.

    • Computations that can be expressed with monad pipelines are easy to construct and read.
    • By programming the binding function we can tuck-in a variety of monad behaviours — this is the so called “programmable semicolon” feature of monads.
    • Monad pipelines can be constructed with Fold, but with suitable definitions of infix operators like DoubleLongRightArrow (⟹) we can produce code that resembles the pipeline in formula (1).
    • A monad pipeline can have polymorphic behaviour by overloading the signatures of fi (and if we have to, Bind.)

    These points are clarified below. For more complete discussions see [Wk1] or [H3].

    The basic Maybe monad

    It is fairly easy to program the basic monad Maybe discussed in [Wk1].

    The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails then the whole pipeline returns a designated failure symbol, say None otherwise the result after the last step is wrapped in another designated symbol, say Maybe.

    Here is the special version of the generic pipeline formula (1) for the Maybe monad:

    "Monad-formula-maybe"

    “Monad-formula-maybe”

    Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):

    MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];
    
    MaybeUnit[None] := None;
    MaybeUnit[x_] := Maybe[x];
    
    MaybeBind[None, f_] := None;
    MaybeBind[Maybe[x_], f_] := 
      Block[{res = f[x]}, If[FreeQ[res, None], res, None]];
    
    MaybeEcho[x_] := Maybe@Echo[x];
    MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];
    
    MaybeOption[f_][xs_] := 
      Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];

    In order to make the pipeline form of the code we write let us give definitions to a suitable infix operator (like “⟹”) to use MaybeBind:

    DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
    DoubleLongRightArrow[x_, y_, z__] := 
      DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];

    Here is an example of a Maybe monad pipeline using the definitions so far:

    data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
    
    MaybeUnit[data]⟹(* lift data into the monad *)
     (Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
     MaybeEcho⟹(* display current value *)
     (Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
    
    (* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
     None *)

    The result is None because:

    1. the data has a number that is too small, and
    2. the definition of MaybeBind stops the pipeline aggressively using a FreeQ[_,None] test.

    Monad laws verification

    Let us convince ourselves that the current definition of MaybeBind gives a monad.

    The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.

    "Monad-laws-table-Maybe"

    “Monad-laws-table-Maybe”

    Extensions with polymorphic behavior

    We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.

    For example the extension of the Maybe monad to handle of Dataset objects is fairly easy and straightforward.

    Here is the formula of the Maybe monad pipeline extended with Dataset objects:

    Here is an example of a polymorphic function definition for the Maybe monad:

    MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];
    
    MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];

    See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.

    A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)

    Polymorphic monads in R’s dplyr

    The R package dplyr, [R1], has implementations centered around monadic polymorphic behavior. The command pipelines based on dplyrcan work on R data frames, SQL tables, and Spark data frames without changes.

    Here is a diagram of a typical work-flow with dplyr:

    "dplyr-pipeline"

    The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.

    Monad code generation

    We can see monad code definitions like the ones for Maybe as some sort of initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates; (see [WL1]). As it was mentioned in the introduction, we do not deal with types for monads explicitly, we generate code for monads instead.

    In this section are given examples with packages that generate monad codes. The case study sections have examples of packages that utilize generated monad codes.

    Maybe monads code generation

    The package [AA2] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section “General work-flow of monad code generation utilization”.)

    Here is an example:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]
    
    GenerateMaybeMonadCode["AnotherMaybe"]
    
    data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
    
    AnotherMaybeUnit[data]⟹(* lift data into the monad *)
     (AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
     AnotherMaybeEcho⟹(* display current value *)
     (AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
    
    (* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
       AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
       None *)

    We see that we get the same result as above (None) and a message prompting failure.

    State monads code generation

    The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [AA3].)

    Here is the special version of the generic pipeline formula (1) for the State monad:

    "Monad-formula-State"

    “Monad-formula-State”

    Note that since the State monad pipeline caries both a value and a state, it is a good idea to have functions that manipulate them separately. For example, we can have functions for context modification and context retrieval. (These are done in [AA3].)

    This loads the package [AA3]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]

    This generates the State monad for the prefix “StMon”:

    GenerateStateMonadCode["StMon"]

    The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Several times are invoked functions for context deposit and retrieval.

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
      StMonEchoValue⟹
      StMonEchoContext⟹
      StMonAddToContext["data"]⟹
      StMonEchoContext⟹
      (StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
      StMonEchoValue⟹
      StMonRetrieveFromContext["data"]⟹
      StMonEchoValue⟹
      StMonRetrieveFromContext["mark"]⟹
      StMonEchoValue;
    
    (* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
       context: <|mark->TooSmall,threshold->0.5|>
       context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
       value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
       value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
       value: TooSmall *)

    Flow control in monads

    We can implement dedicated functions for governing the pipeline flow in a monad.

Let us look at a breakdown of these kinds of functions using the State monad StMon generated above.

    Optional acceptance of a function result

A basic and simple pipeline control function provides optional acceptance of a result: if applying f fails, we ignore its result and keep the current pipeline value.

Here is an example with StMonOption:

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     StMonOption[If[# < 0.3, None] & /@ # &]⟹
     StMonEchoValue
    
    (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)

    Without StMonOption we get failure:

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     (If[# < 0.3, None] & /@ # &)⟹
     StMonEchoValue
    
    (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
       None *)

    Conditional execution of functions

It is natural to want the ability to choose which pipeline function to apply based on a condition.

    This can be done with the functions StMonIfElse and StMonWhen.

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     StMonIfElse[
      Or @@ (# < 0.4 & /@ #) &,
      (Echo["A too small value is present.", "warning:"]; 
        StMon[Style[#1, Red], #2]) &,
      StMon[Style[#1, Blue], #2] &]⟹
     StMonEchoValue
    
     (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
        warning: A too small value is present.
        value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
        StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)

Remark: Using flow control functions like StMonIfElse and StMonWhen with appropriate messages is a better way of handling computations that might fail. The silent failure handling of the basic Maybe monad is convenient only in a small number of use cases.

    Iterative functions

    The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of Nest, NestWhile, FoldList, etc.

    In [AA3] these functionalities are provided through the function StMonIterate.

    Here is a basic example using Nest that corresponds to Nest[#+1&,1,3]:

    StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]
    
    (* StMon[4, <||>] *)

    Consider this command that uses the full signature of NestWhileList:

    NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]
    
    (* {1, 2, 3, 4, 5} *)

    Here is the corresponding StMon iteration code:

    StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]
    
    (* StMon[{1, 2, 3, 4, 5}, <||>] *)

Here is another results-accumulation example with FixedPointList:

    StMonUnit[1.]⟹
     StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]
    
    (* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)

When the functions NestList, NestWhileList, and FixedPointList are used with StMonIterate, their results can be stored in the context. Here is an example:

    StMonUnit[1.]⟹
     StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]
    
    (* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>], 
        StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>], 
        StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)

    More elaborate tests can be found in [AA8].

    Partial pipelines

    Because of the associativity law we can design pipeline flows based on functions made of “sub-pipelines.”

(* Sub-pipeline: echo the current value and context. *)
fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];

(* Sub-pipeline: iterated differentiation of y^x with respect to y. *)
fDIter = Function[{x, ct}, 
   StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];
    
    StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;
    
    (*
      value: 7
      context: <||>
      value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
      context: <||> *)

    General work-flow of monad code generation utilization

    With the abilities to generate and utilize monad codes it is natural to consider the following work flow. (Also shown in the diagram below.)

    1. Come up with an idea that can be expressed with monadic programming.
    2. Look for suitable monad implementation.
    3. If there is no such implementation, make one (or two, or five.)
    4. Having a suitable monad implementation, generate the monad code.
    5. Implement additional pipeline functions addressing envisioned use cases.
    6. Start making pipelines for the problem domain of interest.
7. Are the pipelines satisfactory? If not, go to 5. (Or 2.)

    "make-monads"

    Monad templates

The template nature of the general monads can be exemplified with the group of functions in the package StateMonadCodeGenerator.m, [AA3].

    They are in five groups:

    1. base monad functions (unit testing, binding),
    2. display of the value and context,
    3. context manipulation (deposit, retrieval, modification),
    4. flow governing (optional new value, conditional function application, iteration),
    5. other convenience functions.

    We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.
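
One quick way to see these groups concretely, assuming GenerateStateMonadCode["StMon"] has been evaluated as in the earlier section, is to list the generated symbols (the exact set depends on the generator version):

Names["StMon*"]

(* names like StMonUnit, StMonBind, StMonEchoValue, StMonEchoContext, StMonAddToContext,
   StMonRetrieveFromContext, StMonOption, StMonIfElse, StMonWhen, StMonIterate, ... *)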

    Software design with monadic programming

    The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).

The answers to the question “When to use monadic programming?” can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (as in Haskell and Scala.) Here we discuss making specialized software with or within already existing systems.

    Framework design

    Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that: 1) significant speed-up can be achieved when making new applications, and 2) a set of policies can be imposed on the new applications.

    The rigidness of the framework provides and supports its flexibility — the framework has a backbone of rigid parts and a set of “hot spots” where new functionalities are plugged-in.

Usually Object-Oriented Programming (OOP) frameworks provide inversion of control: the general work-flow is already established and only parts of it are changed. (This is characterized by “leave the driving to us” and “don’t call us, we will call you.”)

    The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)

    In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)

    DSL development

    Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.

Generally, in order to make a DSL we have to choose the language syntax and grammar. With monadic programming the syntax and grammar are essentially settled: the monad pipelines are the commands. What is left is “just” the choice of particular functions and their implementations.

    Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar — without developing the corresponding interpreters — would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.

    Contextual monad classification (case study)

    In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.

    Motivation

    We want to provide a DSL for doing machine learning classification tasks that allows us:

    1. to do basic summarization and visualization of the data,
    2. to control splitting of the data into training and testing sets;
    3. to apply the built-in classifiers;
    4. to apply classifier ensembles (see [AA9] and [AA10]);
5. to evaluate classifier performances with standard measures, and
6. to make ROC plots.

Also, we want the DSL design to provide clear directions on how to add (hook up or plug in) new functionalities.

    The package [AA4] discussed below provides such a DSL through monadic programming.

    Package and data loading

    This loads the package [AA4]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]

    This gets some test data (the Titanic dataset):

    dataName = "Titanic";
    ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
    varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
    varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
    If[dataName == "FisherIris", varNames = Most[varNames]];
    ds = ds[All, AssociationThread[varNames -> #] &];

    Monad design

    The package [AA4] provides functions for the monad ClCon — the functions implemented in [AA4] have the prefix “ClCon”.

    The classifier contexts are Association objects. The pipeline values can have the form:

    ClCon[ val, context:(_String|_Association) ]

The ClCon specific monad functions deposit or retrieve values from the context with the keys: “trainData”, “testData”, “classifier”. The general idea is that if the current value of the pipeline cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, then a message is issued. This is illustrated with the following pipeline-with-comments example.

    "ClCon-basic-example"

    The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context: different functions are used if that variable is a ClassifierFunction object or an association of named ClassifierFunction objects.

Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure. With those kinds of operations it would be easy to make interpreters for natural language DSLs.

    Another usage example

The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, and an acceptance test; if the results are rejected, a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting. That allows the conditional classifier switch to be specified concisely.

First, let us define a function that takes a Classify method as an argument, makes a classifier, and calculates performance measures.

    ClSubPipe[method_String] :=
      Function[{x, ct},
       ClConUnit[x, ct]⟹
        ClConMakeClassifier[method]⟹
        ClConEchoFunctionContext["classifier:", 
         ClassifierInformation[#["classifier"], Method] &]⟹
        ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
        ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
        ClConEchoValue⟹
        ClConEchoFunctionContext[
         ClassifierMeasurements[#["classifier"], 
         ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
       ];

    Using the sub-pipeline function ClSubPipe we make the outlined pipeline.

    SeedRandom[12]
    res =
      ClConUnit[ds]⟹
       ClConSplitData[0.7]⟹
       ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
       ClConEchoFunctionValue["xtabs:", 
        MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
       ClSubPipe["LogisticRegression"]⟹
       (If[#1["Accuracy"] > 0.8,
          Echo["Good accuracy!", "Success:"]; ClConFail,
          Echo["Make a new classifier", "Inaccurate:"]; 
          ClConUnit[#1, #2]] &)⟹
       ClSubPipe["RandomForest"];

    "ClCon-pipeline-2-output"

    Tracing monad pipelines (case study)

The monadic implementations in the package MonadicTracing.m, [AA5], allow tracking of the pipeline execution of functions within other monads.

    The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation. (I.e. without conversion to strings etc.)

    It turned out that by programming MonadicTracing.m I came up with a monad transformer; see [Wk2], [H2].

    Package loading

    This loads the package [AA5]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

    Usage example

    This generates a Maybe monad to be used in the example (for the prefix “Perhaps”):

    GenerateMaybeMonadCode["Perhaps"]
    GenerateMaybeMonadSpecialCode["Perhaps"]

In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings. Producing the grid of functions and comments happens “naturally” with the monad function TraceMonadEchoGrid.

    data = RandomInteger[10, 15];
    
    TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
      TraceMonadEchoContext⟹
      PerhapsFilter[# > 3 &]⟹"filter current value"⟹
      PerhapsEcho⟹"display current value"⟹
      PerhapsWhen[#[[3]] > 3 &, 
       PerhapsEchoFunction[Style[#, Red] &]]⟹
      (Perhaps[#/4] &)⟹
      PerhapsEcho⟹"display current value again"⟹
      TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];

Note that:

    1. the tracing is initiated by just using TraceMonadUnit;
    2. pipeline functions (actual code) and comments are interleaved;
    3. putting a comment string after a pipeline function is optional.

    Another example is the ClCon pipeline in the sub-section “Monad design” in the previous section.

    Summary

This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features for day-to-day programming and for coming up with architectural designs.

The style is based on WL’s basic language features. As a consequence it is fairly concise and incurs little overhead.

    Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.

    References

    Monadic programming

    [Wk1] Wikipedia entry: Monad (functional programming), URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

    [Wk2] Wikipedia entry: Monad transformer, URL: https://en.wikipedia.org/wiki/Monad_transformer .

    [Wk3] Wikipedia entry: Software Design Pattern, URL: https://en.wikipedia.org/wiki/Software_design_pattern .

    [H1] Haskell.org article: Monad laws, URL: https://wiki.haskell.org/Monad_laws.

[H2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333-343. doi:10.1145/199448.199528.

[H3] Philip Wadler, “The essence of functional programming”, (1992), 19th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.

    R

    [R1] Hadley Wickham et al., dplyr: A Grammar of Data Manipulation, (2014), tidyverse at GitHub, URL: https://github.com/tidyverse/dplyr . (See also, http://dplyr.tidyverse.org .)

    Mathematica / Wolfram Language

    [WL1] Leonid Shifrin, “Metaprogramming in Wolfram Language”, (2012), Mathematica StackExchange. (Also posted at Wolfram Community in 2017.) URL of the Mathematica StackExchange answer: https://mathematica.stackexchange.com/a/2352/34008 . URL of the Wolfram Community post: http://community.wolfram.com/groups/-/m/t/1121273 .

    MathematicaForPrediction

    [AA1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016) MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

    [AA2] Anton Antonov, Maybe monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m .

    [AA3] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

    [AA4] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

    [AA5] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

    [AA6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

    [AA7] Anton Antonov, Simple monadic programming, (2017), MathematicaForPrediction at GitHub. (Preliminary version, 40% done.) URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf .

    [AA8] Anton Antonov, Generated State Monad Mathematica unit tests, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m .

    [AA9] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

    [AA10] Anton Antonov, “ROC for classifier ensembles, bootstrapping, damaging, and interpolation”, (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

    Creating and programming domain specific languages

    Introduction

    In this blog post I will provide links to documents, packages, blog posts, and discussions for creating and utilizing Domain Specific Languages (DSLs). I have discussed a few DSLs in previous blog posts (linked below). This blog post provides a more general, higher level view on the application and creation of DSLs. The concrete examples are with Mathematica, but the steps are general and can be done with any programming languages and tools.

    When to apply DSLs

    Here are some situations for applying DSLs.

    1. When designing conversational engines.
2. When there are too many usage scenarios and tuning options for the developed algorithms.
  • For example, we have a bunch of search, recommendation, and interaction algorithms for a dating site. A different department, the User Experience Department (UED), designs interactive user interfaces for these algorithms. We make a natural language DSL that invokes the different algorithms according to specified outcomes. With the DSL the different designs produced by UED are much more easily prototyped, implemented, or fleshed out. The DSL also gives UED an easier-to-understand view of the functionalities provided by the algorithms.
    3. When designing an API for a collection of algorithms.
  • Just designing a DSL can bring clarity about what signatures should be in the API.
  • NIntegrate’s Method option was designed and implemented using a DSL. See this video between 25:00 and 27:30.

    Designing DSLs

    1. Decide what kind of sentences the DSL is going to have.
      • Are natural language sentences going to be used?
      • Are the language words known beforehand or not?
    2. Prepare, create, or accumulate a list of representative sentences.
      • In some cases using Morphological Analysis can greatly help for coming up with use cases and the corresponding sentences.
    3. Create a context free grammar that describes the sentences from the previous step. (Or a large subset of them.)
      • At this stage I use exclusively Extended Backus-Naur Form (EBNF).
  • In some cases the grammar terminals are not known at the design stage and have to be retrieved in some way. (From a database or through natural language processing.)
  • Some conversational engine systems allow or require the grammar specification to be done in XML. I would still do BNF and then move to XML.
        •  It is not that hard to write a parser-and-interpreter that translates BNF into XML. See the end of this blog post for that kind of translation of BNF into OMPL.
    4. Program parser(s) for the grammar.
  • Most of the time I use functional parsers.
      • The package FunctionalParsers.m provides a Mathematica implementation of this kind of parsing.
      • The package can automatically generate parsers from a grammar given in EBNF. (See the coding example below.)
      • I have programmed versions of this package in R and Lua.
    5. Program an interpreter for the parsed sentences.
      • At this stage the parsed sentences are hooked to the algorithms of the problem domain.
  • The package FunctionalParsers.m allows this to be done fairly easily.
    6. Test the parsing and interpretation.

    See the code example below illustrating steps 3-6.

    Introduction to using DSLs in Mathematica

    1. This blog post “Natural language processing with functional parsers” gives an introduction to the DSL application in Mathematica.
    2. This detailed slide-show presentation “Functional parsers for an integration requests language grammar” shows how to use the package FunctionalParsers.m over a small grammar.
3. The answer to the MSE question “How to parse a clojure expression?” gives a good introduction with a simple grammar and shows both direct parser programming and automatic generation from EBNF.

    Advanced example

    The blog post “Simple time series conversational engine” discusses the creation (design and programming) of a simple conversational engine for time series analysis (data loading, finding outliers and trends.)

    Here is a movie demonstrating that conversational engine: http://youtu.be/wlZ5ANglVI4.

    Other discussions

    1. A small part, from 17:30 to 21:00, of the WTC 2012 “Spatial Access Methods and Route Finding” presentation shows a DSL for points of interest queries.
2. The answer to the MSE question “CSS Selectors for Symbolic XML” uses FunctionalParsers.m.
3. This Quantile Regression presentation is aided by the “Simple time series conversational engine” mentioned above.

    Coding example

    This coding example demonstrates steps 3-6 discussed above.

    EBNF-and-parsers-for-LoveFood

    Interpreters-and-parsing-for-LoveFood