Parametrized event records data transformations

Introduction

In this document we describe transformations of events records data in order to make that data more amenable for the application of Machine Learning (ML) algorithms.

Consider the following problem formulation (given with the next five bullet points).

  • From data representing a (most likely very) diverse set of events we want to derive contingency matrices corresponding to each of the variables in that data.

  • The events are observations of the values of a certain set of variables for a certain set of entities. Not all entities have events for all variables.

  • The observation times do not form a regular time grid.

  • Each contingency matrix has rows corresponding to the entities in the data and has columns corresponding to time.

  • The software component providing the functionality should allow parametrization and repeated execution. (As in ML classifier training and testing scenarios.)

The phrase “event records data” is used instead of “time series” in order to emphasize that (i) some variables have categorical values, and (ii) the data can be given in some general database form, like transactions long-form.

The required transformations of the event records in the problem formulation above are done through the monad ERTMon, [AAp3]. (The name “ERTMon” comes from “Event Records Transformations Monad”.)

The monad code generation and utilization is explained in [AA1] and implemented with [AAp1].

It is assumed that the event records data is put in a form that makes it (relatively) easy to extract time series for the set of entity-variable pairs present in that data.

In brief ERTMon performs the following sequence of transformations.

  1. The event records of each entity-variable pair are shifted to adhere to a specified start or end point,

  2. The event records for each entity-variable pair are aggregated and normalized with specified functions over a specified regular grid,

  3. Entity vs. time interval contingency matrices are made for each combination of variable and aggregation function.

The transformations are specified with a “computation specification” dataset.

Here is an example of an ERTMon pipeline over event records:

ERTMon-small-pipeline-example

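Here is a minimal sketch of such a pipeline, assembled from the ERTMon functions described later in this document (eventRecords, entityAttributes, and wCompSpec are defined in the sections below; the exact pipeline in the image may differ):

ERTMonUnit[]⟹
  ERTMonSetEventRecords[eventRecords]⟹
  ERTMonSetEntityAttributes[entityAttributes]⟹
  ERTMonSetComputationSpecification[wCompSpec]⟹
  ERTMonGroupEntityVariableRecords⟹
  ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
  ERTMonAggregateTimeSeries⟹
  ERTMonMakeContingencyMatrices;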
The rest of the document describes in detail:

  • the structure, format, and interpretation of the event records data and computation specifications,

  • the transformations of time series aligning, aggregation, and normalization,

  • the software design pattern – a monad – that allows sequential specification of the desired transformations.

Concrete examples are given using weather data. See [AAp9].

Package load

The following commands load the packages [AAp1-AAp9].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicEventRecordsTransformations.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/WeatherEventRecords.m"]

Data load

The data we use is weather data from meteorological stations close to certain major cities. We retrieve the data with the function WeatherEventRecords from the package [AAp9].

?WeatherEventRecords

WeatherEventRecords[ citiesSpec : {{_String, _String}..}, dateRange : {{_Integer, _Integer, _Integer}, {_Integer, _Integer, _Integer}}, wProps : {_String..} : {"Temperature"}, nStations_Integer : 1 ] gives an association with event records data.

citiesSpec = {{"Miami", "USA"}, {"Chicago", "USA"}, {"London",  "UK"}};
dateRange = {{2017, 7, 1}, {2018, 6, 30}};
wProps = {"Temperature", "MaxTemperature", "Pressure", "Humidity", "WindSpeed"};
res1 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

citiesSpec = {{"Jacksonville", "USA"}, {"Peoria", "USA"}, {"Melbourne", "Australia"}};
dateRange = {{2016, 12, 1}, {2017, 12, 31}};
res2 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

Here we assign the obtained datasets to variables we use below:

eventRecords = Join[res1["eventRecords"], res2["eventRecords"]];
entityAttributes = Join[res1["entityAttributes"], res2["entityAttributes"]];

Here are the summaries of the datasets eventRecords and entityAttributes:

RecordsSummary[eventRecords]
ERTMon-RecordsSummary-eventRecord
RecordsSummary[entityAttributes]
ERTMon-RecordsSummary-entityAttributes

Design considerations

Workflow

The steps of the main event records transformations workflow addressed in this document follow.

  1. Ingest event records and entity attributes given in the Star schema style.

  2. Ingest a computation specification.

    1. Specified are aggregation time intervals, aggregation functions, normalization types and functions.

  3. Group event records based on unique entity ID and variable pairs.

    1. Additional filtering can be applied using the entity attributes.

  4. For each variable find descriptive statistics properties.

    1. This is to facilitate normalization procedures.

    2. Optionally, for each variable find outlier boundaries.

  5. Align each group of records to start or finish at some specified point.

    1. For each variable we want to impose a regular time grid.

  6. From each group of records produce a time series.

  7. For each time series do prescribed aggregation and normalization.

    1. The variable that corresponds to each group of records has at least one (possibly several) computation specifications.

  8. Make a contingency matrix for each time series obtained in the previous step.

    1. The contingency matrices have entity ID’s as rows and the time intervals of the regular time grid as columns.

The following flow-chart corresponds to the list of steps above.

ERTMon-workflows

A corresponding monadic pipeline is given in the section “Larger example pipeline”.

Feature engineering perspective

The workflow above describes a way to do feature engineering over a collection of event records data. For a given entity ID and a variable we derive several different time series.

A couple of examples follow.

  • One possible derived feature (time series) is, for each entity-variable pair, a time series of the hourly mean value over each of the eight most recent hours for that entity. The mean values are normalized by the average of the records corresponding to that entity-variable pair.

  • Another possible derived feature (time series) is, for each entity-variable pair, a time series of the number of outliers in each half-hour interval, considering the most recent 20 half-hour intervals. The outliers are found using outlier boundaries derived from all values of the corresponding variable, across all entities.

From the examples above – and some others – we conclude that for each feature we want to be able to specify:

  • maximum history length (say from the most recent observation),

  • aggregation interval length,

  • aggregation function (to be applied in each interval),

  • normalization function (per entity, per cohort of entities, per variable),

  • conversion of categorical values into numerical ones.
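For instance, the first example above (hourly means over the eight most recent hours, normalized per entity) could be expressed with a specification row like the following sketch. It uses the columns introduced in the section “Computation specification” below; the concrete values are illustrative assumptions (times are in seconds).

(* Illustrative specification row: hourly mean over the last 8 hours,
   normalized by the entity's own mean value. *)
Join[EmptyComputationSpecificationRow[],
 <|"Variable" -> "Temperature",
   "MaxHistoryLength" -> 8*3600,
   "AggregationIntervalLength" -> 3600,
   "AggregationFunction" -> "Mean",
   "NormalizationScope" -> "Entity",
   "NormalizationFunction" -> "Mean"|>]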

Repeated execution

We want to be able to do repeated executions of the specified workflow steps.

Consider the following scenario. After the event records data is converted to an entity-vs-feature contingency matrix, we use that matrix to train and test a classifier. We want to find the combination of features that gives the best classifier results. For that reason we want to be able to easily and systematically change the computation specifications (interval size, aggregation and normalization functions, etc.). With different computation specifications we obtain different entity-vs-feature contingency matrices, which would perform differently with different classifiers.

Using the classifier training and testing scenario we see that there is another repeated execution perspective: after the feature engineering is done over the training data, we want to be able to execute exactly the same steps over the test data. Note that with the training data we find certain global or cohort normalization values and outlier boundaries that have to be used over the test data. (Not derived from the test data.)

The following diagram further describes the repeated execution workflow.

ERTMon-repeated-execution-workflow

Further discussion of making and using ML classification workflows through the monad software design pattern can be found in [AA2].

Event records data design

The data is structured to follow the style of a Star schema. We have an event records dataset (table) and an entity attributes dataset (table).

The proposed structure of these datasets (tables) satisfies a wide range of data modeling requirements. (Medical and financial modeling included.)

Entity event data

The entity event data has the columns “EntityID”, “LocationID”, “ObservationTime”, “Variable”, “Value”.

RandomSample[eventRecords, 6]
ERTMon-eventRecords-sample

Most events can be described through “Entity event data”. The entities can be anything that produces a set of event data: financial transactions, vital sign monitors, wind speed sensors, chemical concentration sensors.

The locations can be anything that gives the events certain “spatial” attributes: medical units in hospitals, sensors geo-locations, tiers of financial transactions.
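As an illustration, here is a sketch of a single event record in that schema. (The concrete values, in particular the encoding of the observation time, are illustrative assumptions, not taken from the actual data.)

(* One illustrative record in the "entity event data" schema. *)
<|"EntityID" -> "KMFL", "LocationID" -> "Miami",
  "ObservationTime" -> AbsoluteTime[{2017, 7, 1, 12, 0, 0}],
  "Variable" -> "Temperature", "Value" -> 31.5|>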

Entity attributes data

The entity attributes dataset (table) has attributes (immutable properties) of the entities. (Like, gender and race for people, longitude and latitude for wind speed sensors.)

entityAttributes[[1 ;; 6]]
ERTMon-entityAttributes-sample

Example

For example, here we take all weather stations in USA:

ws = Normal[entityAttributes[Select[#Attribute == "Country" && #Value == "USA" &], "EntityID"]]

(* {"KMFL", "KMDW", "KNIP", "KGEU"} *)

Here we take all temperature event records for those weather stations:

srecs = eventRecords[Select[#Variable == "Temperature" && MemberQ[ws, #EntityID] &]];

And here we plot the corresponding time series, obtained by grouping the records by station (entity ID) and taking the columns “ObservationTime” and “Value”:

grecs = Normal @ GroupBy[srecs, #EntityID &][All, All, {"ObservationTime", "Value"}];
DateListPlot[grecs, ImageSize -> Large, PlotTheme -> "Detailed"]
ERTMon-DateListPlot-USA-Temperature

Monad elements

This section goes through the steps of the general ERTMon workflow. For didactic purposes each sub-section changes the pipeline assigned to the variable p. Of course all functions can be chained into one big pipeline as shown in the section “Larger example pipeline”.

Monad unit

The monad is initialized with ERTMonUnit.

ERTMonUnit[]

(* ERTMon[None, <||>] *)

Ingesting event records and entity attributes

The event records dataset (table) and entity attributes dataset (table) are set with corresponding setter functions. Alternatively, they can be read from files in a specified directory.

p =
  ERTMonUnit[]⟹
   ERTMonSetEventRecords[eventRecords]⟹
   ERTMonSetEntityAttributes[entityAttributes]⟹
   ERTMonEchoDataSummary;
   
ERTMon-echo-data-summary

Computation specification

Using the package [AAp3] we can create a computation specification dataset. Below is an example of constructing a fairly complicated computation specification.

The package function EmptyComputationSpecificationRow can be used to construct the rows of the specification.

EmptyComputationSpecificationRow[]

(* <|"Variable" -> Missing[], "Explanation" -> "", 
    "MaxHistoryLength" -> 3600, "AggregationIntervalLength" -> 60, 
    "AggregationFunction" -> "Mean", "NormalizationScope" -> "Entity", 
    "NormalizationFunction" -> "None"|> *)
    
    
compSpecRows = 
  Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
      "MaxHistoryLength" -> 60*24*3600, 
      "AggregationIntervalLength" -> 2*24*3600, 
      "AggregationFunction" -> "Mean", 
      "NormalizationScope" -> "Entity", 
      "NormalizationFunction" -> "Mean"|>] & /@ 
   Union[Normal[eventRecords[All, "Variable"]]];
compSpecRows =
  Join[
   compSpecRows, 
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "Range", 
       "NormalizationScope" -> "Country", 
       "NormalizationFunction" -> "Mean"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]],
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "OutliersCount", 
       "NormalizationScope" -> "Variable"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]]
   ];

The constructed rows are assembled into a dataset (with Dataset). The function ProcessComputationSpecification is used to convert a user-made specification dataset into a form used by ERTMon.

wCompSpec = 
 ProcessComputationSpecification[Dataset[compSpecRows]][SortBy[#Variable &]]
 
ERTMon-wCompSpec

The computation specification is set to the monad with the function ERTMonSetComputationSpecification.

Alternatively, a computation specification can be created and filled-in as a CSV file and read into the monad. (Not described here.)

Grouping event records by entity-variable pairs

With the function ERTMonGroupEntityVariableRecords we group the event records by the found unique entity-variable pairs. Note that in the pipeline below we set the computation specification first.

p =
  p⟹
   ERTMonSetComputationSpecification[wCompSpec]⟹
   ERTMonGroupEntityVariableRecords;

Descriptive statistics (per variable)

After the data is ingested into the monad and the event records are grouped per entity-variable pairs we can find certain descriptive statistics for the data. This is done with the general function ERTMonComputeVariableStatistic and the specialized function ERTMonFindVariableOutlierBoundaries.

p⟹ERTMonComputeVariableStatistic[RecordsSummary]⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-1
p⟹ERTMonComputeVariableStatistic⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-2
p⟹ERTMonComputeVariableStatistic[TakeLargest[#, 3] &]⟹ERTMonEchoValue;

(* value: <|Humidity->{1.,1.,0.993}, MaxTemperature->{48,48,48},
            Pressure->{1043.1,1042.8,1041.1}, Temperature->{42.28,41.94,41.89},
            WindSpeed->{54.82,44.63,44.08}|> *)
            
            

Finding the variable outlier boundaries

The computation of outlier counts and fractions can be specified in the computation specification. Because of this there is a specialized function for finding outliers, ERTMonFindVariableOutlierBoundaries. That function makes the association of the found variable outlier boundaries (i) the pipeline value and (ii) the value of the context key “variableOutlierBoundaries”. The outlier boundaries are found using the functions of the package [AAp6].

If no argument is specified ERTMonFindVariableOutlierBoundaries uses the Hampel identifier (HampelIdentifierParameters).

p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonEchoValue;

(* value: <|Humidity->{0.522536,0.869464}, MaxTemperature->{14.2106,31.3494},
            Pressure->{1012.36,1022.44}, Temperature->{9.88823,28.3318},
            WindSpeed->{5.96141,19.4086}|> *)
            
Keys[p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonTakeContext]

(* {"eventRecords", "entityAttributes", "computationSpecification",
    "entityVariableRecordGroups", "variableOutlierBoundaries"} *)
   

In the rest of the document we use the outlier boundaries found with the more conservative identifier SPLUSQuartileIdentifierParameters.

p =
  p⟹
   ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
   ERTMonEchoValue;

(* value: <|Humidity->{0.176,1.168}, MaxTemperature->{-1.67,45.45},
            Pressure->{1003.75,1031.35}, Temperature->{-5.805,43.755},
            WindSpeed->{-5.005,30.555}|> *)

Conversion of event records to time series

The grouped event records are converted into time series with the function ERTMonEntityVariableGroupsToTimeSeries. The time series are aligned to a time point specification given as an argument. The argument can be: a date object, “MinTime”, “MaxTime”, or “None”. (“MaxTime” is the default.)

p⟹
  ERTMonEntityVariableGroupsToTimeSeries["MinTime"]⟹
  ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
  
ERTMon-records-groups-minTime

Compare the last output with the output of the following command.

p =
  p⟹
   ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
   ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
ERTMon-records-groups-maxTime

Time series restriction and aggregation

The main goal of ERTMon is to convert a diverse, general collection of event records into a collection of aligned time series over specified regular time grids.

The regular time grids are specified with the columns “MaxHistoryLength” and “AggregationIntervalLength” of the computation specification. The time series of the variables in the computation specification are restricted to the corresponding maximum history lengths and are aggregated using the corresponding aggregation lengths and functions.

p =
  p⟹
   ERTMonAggregateTimeSeries⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-restriction-and-aggregation

Application of time series functions

At this point we can apply time series modifying functions. An often used function of this kind is the moving average.

p⟹
  ERTMonApplyTimeSeriesFunction[MovingAverage[#, 6] &]⟹
  ERTMonEchoFunctionValue[DateListPlot /@ #[[{1, 3, 5}]] &];
ERTMon-moving-average-application

Note that the result is given as a pipeline value; the value of the context key “timeSeries” is not changed.

(In the future, the computation specification and its handling might be extended to handle moving average or other time series function specifications.)

Normalization

By “normalization” we mean that the values of a given time series are divided (normalized) by a descriptive statistic derived from a specified set of values. The specified set of values is given with the parameter “NormalizationScope” in the computation specification.

At the normalization stage each time series is associated with an entity ID and a variable.

Normalization is done at three different scopes: “entity”, “attribute”, and “variable”.

Given a time series T(i,var) corresponding to entity ID i and a variable var we define the normalization values for the different scopes in the following ways.

  • Normalization with scope “entity” means that the descriptive statistic is derived from the values of T(i,var) only.

  • Normalization with scope “attribute” means that

    • from the entity attributes dataset we find the attribute value that corresponds to i,

    • next we find all entity ID’s that are associated with the same attribute value,

    • next we find the value of the normalization descriptive statistic using the time series that correspond to the variable var and the entity ID’s found in the previous step.

  • Normalization with scope “variable” means that the descriptive statistic is derived from the values of all time series corresponding to var.

Note that the scope “entity” is the most granular, and the scope “variable” is the coarsest.
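To make the “entity” scope concrete, here is a minimal sketch (plain WL, not ERTMon code) of normalizing a time series by a descriptive statistic derived from its own values:

(* Normalize a time series by the mean of its own values ("entity" scope). *)
ts = TimeSeries[{{1, 10.}, {2, 20.}, {3, 30.}}];
normalizationValue = Mean[ts["Values"]];  (* 20. *)
tsNormalized = TimeSeriesMap[#/normalizationValue &, ts]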

The following command demonstrates the normalization effect – compare the y-axes scales of the time series corresponding to the same entity-variable pair.

p =
  p⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &]⟹
   ERTMonNormalize⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-normalization

Here are the normalization values that should be used when normalizing “unseen data.”

p⟹ERTMonTakeNormalizationValues

(* <|{"Humidity.Range", "Country", "USA"} -> 0.0864597, 
  {"Humidity.Range", "Country", "UK"} -> 0.066, 
  {"Humidity.Range", "Country", "Australia"} -> 0.145968, 
  {"MaxTemperature.Range", "Country", "USA"} -> 2.85468, 
  {"MaxTemperature.Range", "Country", "UK"} -> 78/31, 
  {"MaxTemperature.Range", "Country", "Australia"} -> 3.28871, 
  {"Pressure.Range", "Country", "USA"} -> 2.08222, 
  {"Pressure.Range", "Country", "Australia"} -> 3.33871, 
  {"Temperature.Range", "Country", "USA"} -> 2.14411, 
  {"Temperature.Range", "Country", "UK"} -> 1.25806, 
  {"Temperature.Range", "Country", "Australia"} -> 2.73032, 
  {"WindSpeed.Range", "Country", "USA"} -> 4.13532, 
  {"WindSpeed.Range", "Country", "UK"} -> 3.62097, 
  {"WindSpeed.Range", "Country", "Australia"} -> 3.17226|> *)

Making contingency matrices

One of the main goals of ERTMon is to produce contingency matrices corresponding to the event records data.

The contingency matrices are created and stored as SSparseMatrix objects, [AAp7].

p =
  p⟹ERTMonMakeContingencyMatrices;

We can obtain an association of the contingency matrices for each variable-and-aggregation-function pair, or obtain the overall contingency matrix.

p⟹ERTMonTakeContingencyMatrices
Dimensions /@ %
ERTMon-contingency-matrices
smat = p⟹ERTMonTakeContingencyMatrix;
MatrixPlot[smat, ImageSize -> 700]
ERTMon-contingency-matrix
RowNames[smat]

(* {"EGLC", "KGEU", "KMDW", "KMFL", "KNIP", "WMO95866"} *)

Larger example pipeline

The pipeline shown in this section utilizes all main workflow functions of ERTMon. The weather data and computation specification used are described above.

ERTMon-large-pipeline-example
ERTMon-large-pipeline-example-output
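As a hedged reconstruction of such a pipeline (the exact pipeline in the images above may differ; echo steps are omitted, and eventRecords, entityAttributes, and wCompSpec are as defined above):

p =
  ERTMonUnit[]⟹
   ERTMonSetEventRecords[eventRecords]⟹
   ERTMonSetEntityAttributes[entityAttributes]⟹
   ERTMonSetComputationSpecification[wCompSpec]⟹
   ERTMonGroupEntityVariableRecords⟹
   ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
   ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
   ERTMonAggregateTimeSeries⟹
   ERTMonNormalize⟹
   ERTMonMakeContingencyMatrices;
smat = p⟹ERTMonTakeContingencyMatrix;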

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Event Records Transformations Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicEventRecordsTransformations.m .

[AAp4] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp5] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m .

[AAp6] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/OutlierIdentifiers.m.

[AAp7] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/SSparseMatrix.m .

[AAp8] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp9] Anton Antonov, Weather event records data Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Misc/WeatherEventRecords.m .

Documents

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2017/06/23/monad-code-generation-and-extension .

[AA1a] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, A monad for classification workflows, (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows .

The Great conversation in USA presidential speeches

Introduction

This document shows a way to chart in Mathematica / WL the evolution of topics in collections of texts. The making of this document (and related code) is primarily motivated by the fascinating concept of the Great Conversation, [Wk1, MA1]. In brief, all western civilization books are based on 103 great ideas; if we find the great ideas each significant book is based on we can construct a time-line (spanning centuries) of the great conversation between the authors; see [MA1, MA2, MA3].

Instead of finding the great ideas in a text collection we extract topics statistically, using dimension reduction with Non-Negative Matrix Factorization (NNMF), [AAp3, AA1, AA2].

The presented computational results are based on the text collections of State of the Union speeches of USA presidents [D2]. The code in this document can be easily configured to use the much smaller text collection [D1] available online and in Mathematica/WL. (The collection [D1] is fairly small, 51 documents; the collection [D2] is much larger, 2453 documents.)

The procedures (and code) described in this document, of course, work on other types of text collections. For example: movie reviews, podcasts, editorial articles of a magazine, etc.

A secondary objective of this document is to illustrate the use of the monadic programming pipeline as a Software design pattern, [AA3]. In order to make the code concise in this document I wrote the package MonadicLatentSemanticAnalysis.m, [AAp5]. Compare with the code given in [AA1].

The very first version of this document was written for the 2017 summer course “Data Science for the Humanities” at the University of Oxford, UK.

Outline of the procedure applied

The procedure described in this document has the following steps.

  1. Get a collection of documents with known dates of publishing.
    • Or other types of tags associated with the documents.
  2. Do preliminary analysis of the document collection.
    • Number of documents; number of unique words.

    • Number of words per document; number of documents per word.

    • (Some of the statistics of this step are easier to do after the Linear vector space representation step.)

  3. Optionally perform Natural Language Processing (NLP) tasks.

    1. Obtain or derive stop words.

    2. Remove stop words from the texts.

    3. Apply stemming to the words in the texts.

  4. Linear vector space representation.

    • This means that we represent the collection with a document-word matrix.

    • Each unique word is a basis vector in that space.

    • For each document the corresponding point in that space is derived from the number of appearances of the document’s words.

  5. Extract topics.

    • In this document NNMF is used.

    • In order to obtain better results with NNMF some experimentation and refinements of the topics search have to be done.

  6. Map the documents over the extracted topics.

    • The original matrix of the vector space representation is replaced with a matrix with columns representing topics (instead of words.)
  7. Order the topics according to their presence across the years (or other related tags).
    • This can be done with hierarchical clustering.

    • Alternatively,

      1. for a given topic find the weighted mean of the years of the documents that have that topic, and

      2. order the topics according to those mean values.

  8. Visualize the evolution of the documents according to their topics.

    1. This can be done by simply finding the contingency matrix year vs topic.

    2. For the president speeches we can use the president names for the time-line temporal axis instead of years.

      • Because the corresponding time intervals of presidential office occupation do not overlap.

Remark: Some of the functions used in this document combine several steps into one function call (with corresponding parameters.)

Packages

This loads the packages [AAp1-AAp8]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/HeatmapPlot.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/RSparseMatrix.m"];

(Note that some of the packages are imported automatically by [AAp5].)

The functions of the central package in this document, [AAp5], have the prefix “LSAMon”. Here is a sample of those names:

Short@Names["LSAMon*"]

(* {"LSAMon", "LSAMonAddToContext", "LSAMonApplyTermWeightFunctions", <>, "LSAMonUnit", "LSAMonUnitQ", "LSAMonWhen"} *)

Data load

In this section we load a text collection from a specified source.

The text collection from “Presidential Nomination Acceptance Speeches”, [D1], is small and can be used for multiple code verifications and re-runnings. The “State of Union addresses of USA presidents” text collection from [D2] was converted to a Mathematica/WL object by Christopher Wolfram (and sent to me in a private communication.) The text collection [D2] provides far more interesting results (and they are shown below.)

If[False, (* use True to load the small collection [D1]; the results shown below use [D2] *)
  speeches = ResourceData[ResourceObject["Presidential Nomination Acceptance Speeches"]];
  names = StringSplit[Normal[speeches[[All, "Person"]]][[All, 2]], "::"][[All, 1]],

  (*ELSE*)
  (*State of the union addresses provided by Christopher Wolfram. *)      
  Get["~/MathFiles/Digital humanities/Presidential speeches/speeches.mx"];
  names = Normal[speeches[[All, "Name"]]];
];

dates = Normal[speeches[[All, "Date"]]];
texts = Normal[speeches[[All, "Text"]]];

Dimensions[speeches]

(* {2453, 4} *)

Basic statistics for the texts

Using different contingency matrices we can derive basic statistical information about the document collection. (The document-word matrix is a contingency matrix.)

First we convert the text data into long form:

docWordRecords = 
  Join @@ MapThread[
    Thread[{##}] &, {Range@Length@texts, names, 
     DateString[#, {"Year"}] & /@ dates, 
     DeleteStopwords@*TextWords /@ ToLowerCase[texts]}, 1];

Here is a sample of the rows of the long-form:

GridTableForm[RandomSample[docWordRecords, 6], 
 TableHeadings -> {"document index", "name", "year", "word"}]

Here is a summary:

Multicolumn[
 RecordsSummary[docWordRecords, {"document index", "name", "year", "word"}, "MaxTallies" -> 8], 4, Dividers -> All, Alignment -> Top]

Using the long form we can compute the document-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {1, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], 
 MaxPlotPoints -> 300, ImageSize -> 800, 
 AspectRatio -> 1/3]

Here is the president-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {2, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], MaxPlotPoints -> 300, ImageSize -> 800, AspectRatio -> 1/3]

Here is an alternative way to compute text collection statistics through the document-word matrix computed within the monad LSAMon:

LSAMonUnit[texts]⟹LSAMonEchoTextCollectionStatistics[];

Procedure application

Stop words

Here is one way to obtain stop words:

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];
Length[stopWords]
RandomSample[stopWords, 12]

(* 304 *)

(* {"has", "almost", "next", "WHO", "seeming", "together", "rather", "runners-up", "there's", "across", "cannot", "me"} *)

We can complete this list with additional stop words derived from the collection itself. (Not done here.)

Linear vector space representation and dimension reduction

Remark: In the rest of the document we use “term” to mean “word” or “stemmed word”.

The following code makes a document-term matrix from the document collection, exaggerates the representations of the terms using “TF-IDF”, and then does topic extraction through dimension reduction. The dimension reduction is done with NNMF; see [AAp3, AA1, AA2].

SeedRandom[312]

mObj =
  LSAMonUnit[texts]⟹
   LSAMonMakeDocumentTermMatrix[{}, stopWords]⟹
   LSAMonApplyTermWeightFunctions[]⟹
   LSAMonTopicExtraction[Max[5, Ceiling[Length[texts]/100]], 60, 12, "MaxSteps" -> 6, "PrintProfilingInfo" -> True];

This table shows the pipeline commands above with comments:

Detailed description

The monad object mObj has a context of named values that is an Association with the following keys:

Keys[mObj⟹LSAMonTakeContext]

(* {"texts", "docTermMat", "terms", "wDocTermMat", "W", "H", "topicColumnPositions", "automaticTopicNames"} *)

Let us clarify the values by briefly describing the computational steps.

  1. From texts we derive the document-term matrix \text{docTermMat}\in \mathbb{R}^{n \times m}, where n is the number of documents and m is the number of terms.
    • The terms are words or stemmed words.

    • This is done with LSAMonMakeDocumentTermMatrix.

  2. From docTermMat the (weighted) matrix wDocTermMat is derived using “TF-IDF”.

    • This is done with LSAMonApplyTermWeightFunctions.

  3. Using docTermMat we find the terms that are present in a sufficiently large number of documents; their column indices are assigned to topicColumnPositions.

  4. Matrix factorization.

    1. Restrict the weighted matrix to the found columns: \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\in \mathbb{R}^{n \times m_1}, where m_1 = |\text{topicColumnPositions}|.

    2. Compute with NNMF the factorization \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\approx W H, where W\in \mathbb{R}^{n \times k}, H\in \mathbb{R}^{k \times m_1}, and k is the number of topics.

    3. The values for the keys “W”, “H”, and “topicColumnPositions” are computed and assigned by LSAMonTopicExtraction.

  5. From the top terms of each topic automatic topic names are derived and assigned to the key automaticTopicNames in the monad context.

    • Also done by LSAMonTopicExtraction.
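For example, the factors can be taken out of the monad context for direct inspection. Here is a sketch, assuming the context keys listed above:

(* Take the NNMF factors from the monad context; a sketch. *)
context = mObj⟹LSAMonTakeContext;
{W, H} = {context["W"], context["H"]};
Dimensions /@ {W, H}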

Statistical thesaurus

At this point in the object mObj we have the factors of NNMF. Using those factors we can find a statistical thesaurus for a given set of words. The following code calculates such a thesaurus, and echoes it in a tabulated form.

queryWords = {"arms", "banking", "economy", "education", "freedom", 
   "tariff", "welfare", "disarmament", "health", "police"};

mObj⟹
  LSAMonStatisticalThesaurus[queryWords, 12]⟹
  LSAMonEchoStatisticalThesaurus[];

By observing the thesaurus entries we can see that the words in each entry are semantically related.

Note that the word “welfare” strongly associates with “[applause]”. The rest of the query words do not, which can be seen by examining larger thesaurus entries:

thRes =
  mObj⟹
   LSAMonStatisticalThesaurus[queryWords, 100]⟹
   LSAMonTakeValue;
Cases[thRes, "[applause]", Infinity]

(* {"[applause]", "[applause]"} *)

The second query word with “[applause]” among its thesaurus entries is “education”.

Detailed description

The statistical thesaurus is computed by using the NNMF’s right factor H.

For a given term, its corresponding column in H is found and the nearest neighbors of that column are found in the space \mathbb{R}^{k} using the Euclidean norm.
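Here is a sketch of that computation done directly with the factors. It assumes that the context key “terms” holds the full term list, that the columns of H correspond to the terms at the positions “topicColumnPositions”, and that “economy” is among them:

(* A sketch: the 12 terms nearest to "economy" in the column space of H. *)
context = mObj⟹LSAMonTakeContext;
hTerms = context["terms"][[context["topicColumnPositions"]]];
hCols = AssociationThread[hTerms, Transpose[Normal[context["H"]]]];
nf = Nearest[Values[hCols] -> Keys[hCols], DistanceFunction -> EuclideanDistance];
nf[hCols["economy"], 12]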

Extracted topics

The topics are the rows of the right factor H of the factorization obtained with NNMF.

Let us tabulate the topics found above with LSAMonTopicExtraction:

mObj⟹ LSAMonEchoTopicsTable["NumberOfTerms" -> 6, "MagnificationFactor" -> 0.8, Appearance -> "Horizontal"];

Map documents over the topics

The function LSAMonTopicsRepresentation finds the top outliers for each row of NNMF’s left factor W. (The outliers are found using the package [AAp4].) The obtained list of indices gives the topic representation of the collection of texts.

Short@(mObj⟹LSAMonTopicsRepresentation[]⟹LSAMonTakeContext)["docTopicIndices"]

{{53}, {47, 53}, {25}, {46}, {44}, {15, 42}, {18}, <>, {30}, {33}, {7, 60}, {22, 25}, {12, 13, 25, 30, 49, 59}, {48, 57}, {14, 41}}

Further we can see that if the documents have tags associated with them — like author names or dates — we can make a contingency matrix of tags vs topics. (See [AAp8, AA4].) This is also done by the function LSAMonTopicsRepresentation that takes tags as an argument. If the tags argument is Automatic, then the tags are simply the document indices.

Here is an example:

rsmat = mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Here is an example of calling the function LSAMonTopicsRepresentation with arbitrary tags.

rsmat = mObj⟹LSAMonTopicsRepresentation[DateString[#, "MonthName"] & /@ dates]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Note that the matrix plots above are very close to the charting of the Great conversation that we are looking for. This can be made more obvious by observing the row names and columns names in the tabulation of the transposed matrix rsmat:

Magnify[#, 0.6] &@MatrixForm[Transpose[rsmat]]

Charting the great conversation

In this section we show several ways to chart the Great Conversation in the collection of speeches.

There are several possible ways to make the chart: using a time-line plot, using a heat-map plot, and using appropriate tabulation (with MatrixForm or Grid).

In order to make the code in this section more concise the package RSparseMatrix.m, [AAp7, AA5], is used.

Topic name to topic words

This command makes an Association between the topic names and the top topic words.

aTopicNameToTopicTable = 
  AssociationThread[(mObj⟹LSAMonTakeContext)["automaticTopicNames"], 
   mObj⟹LSAMonTopicsTable["NumberOfTerms" -> 12]⟹LSAMonTakeValue];

Here is a sample:

Magnify[#, 0.7] &@ aTopicNameToTopicTable[[1 ;; 3]]

Time-line plot

This command makes a contingency matrix between the documents and the topics (as described above):

rsmat = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue]

This time-line plot shows the great conversation in the USA presidents’ State of the Union speeches:

TimelinePlot[
 Association@
  MapThread[
   Tooltip[#2, aTopicNameToTopicTable[#2]] -> dates[[ToExpression@#1]] &, 
   Transpose[RSparseMatrixToTriplets[rsmat]]], 
 PlotTheme -> "Detailed", ImageSize -> 1000, AspectRatio -> 1/2, PlotLayout -> "Stacked"]

The plot is too cluttered, so it is a good idea to investigate other visualizations.

Topic vs president heatmap

We can use the USA president names instead of years in the Great Conversation chart because the USA presidents’ terms do not overlap.

This makes a contingency matrix presidents vs topics:

rsmat2 = ToRSparseMatrix[
   mObj⟹LSAMonTopicsRepresentation[
     names]⟹LSAMonTakeValue];

Here we compute the chronological order of the presidents based on the dates of their speeches:

nameToMeanYearRules = 
  Map[#[[1, 1]] -> Mean[N@#[[All, 2]]] &, 
   GatherBy[MapThread[List, {names, ToExpression[DateString[#, "Year"]] & /@ dates}], First]];
ordRowInds = Ordering[RowNames[rsmat2] /. nameToMeanYearRules];

This heat-map plot uses the (experimental) package HeatmapPlot.m, [AAp6]:

Block[{m = rsmat2[[ordRowInds, All]]},
 HeatmapPlot[SparseArray[m], RowNames[m], 
  Thread[Tooltip[ColumnNames[m], aTopicNameToTopicTable /@ ColumnNames[m]]],
  DistanceFunction -> {None, Sort}, ImageSize -> 1000, 
  AspectRatio -> 1/2]
 ]

Note the value of the option DistanceFunction: there is no re-ordering of the rows, and the columns are re-ordered by sorting. Also, the topic names on the horizontal axis have tool-tips.

References

Text data

[D1] Wolfram Data Repository, "Presidential Nomination Acceptance Speeches".

[D2] US Presidents, State of the Union Addresses, Trajectory, 2016. ISBN 1681240009, 9781681240008.

[D3] Gerhard Peters, "Presidential Nomination Acceptance Speeches and Letters, 1880-2016", The American Presidency Project.

[D4] Gerhard Peters, "State of the Union Addresses and Messages", The American Presidency Project.

Packages

[AAp1] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Monadic latent semantic analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, RSparseMatrix Mathematica package, (2015), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub.

Books and articles

[AA1] Anton Antonov, "Topic and thesaurus extraction from a document collection", (2013), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, "Statistical thesaurus from NPR podcasts", (2013), MathematicaForPrediction at WordPress blog.

[AA3] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub.

[AA4] Anton Antonov, "Contingency tables creation examples", (2016), MathematicaForPrediction at WordPress blog.

[AA5] Anton Antonov, "RSparseMatrix for sparse matrices with named rows and columns", (2015), MathematicaForPrediction at WordPress blog.

[Wk1] Wikipedia entry, Great Conversation.

[MA1] Mortimer Adler, "The Great Conversation Revisited," in The Great Conversation: A Peoples Guide to Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago, 1990, p. 28.

[MA2] Mortimer Adler, "Great Ideas".

[MA3] Mortimer Adler, "How to Think About the Great Ideas: From the Great Books of Western Civilization", 2000, Open Court.

Monad code generation and extension

… in Mathematica / Wolfram Language

Anton Antonov

MathematicaForPrediction at GitHub

MathematicaVsR at GitHub

June 2017

Introduction

This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.

The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:

  1. easy to construct, read, and modify sequences of commands (pipelines),
  2. easy to program polymorphic behaviour,
  3. easy to program context utilization.

Speaking informally,

  • Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.

The theoretical background provided in this document is given in the Wikipedia article on Monadic programming, [Wk1], and the article “The essence of functional programming” by Philip Wadler, [H3]. The code in this document is based on the primary monad definition given in [Wk1,H3]. (Based on the “Kleisli triple” and used in Haskell.)

The general monad structure can be seen as:

  1. a software design pattern;
  2. a fundamental programming construct (similar to class in object-oriented programming);
  3. an interface for software types to have implementations of.

In this document we treat the monad structure as a design pattern, [Wk3]. (After reading [H3] point 2 becomes more obvious. A similar in spirit, minimalistic approach to Object-oriented Design Patterns is given in [AA1].)

We do not deal with types for monads explicitly, we generate code for monads instead. One reason for this is the “monad design pattern” perspective; another one is that in Mathematica / WL the notion of algebraic data type is not needed — pattern matching comes from the core “book of replacement rules” principle.

The rest of the document is organized as follows.

1. Fundamental sections The section “What is a monad?” gives the necessary definitions. The section “The basic Maybe monad” shows how to program a monad from scratch in Mathematica / WL. The section “Extensions with polymorphic behavior” shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming, the rest of the document can be skipped.)

2. Monadic programming in practice The section “Monad code generation” describes packages for generating monad code. The section “Flow control in monads” describes additional, control flow functionalities. The section “General work-flow of monad code generation utilization” gives a general perspective on the use of monad code generation. The section “Software design with monadic programming” discusses (small scale) software design with monadic programming.

3. Case study sections The case study sections “Contextual monad classification” and “Tracing monad pipelines” hopefully have interesting and engaging examples of monad code generation, extension, and utilization.

What is a monad?

The monad definition

In this document a monad is any combination of a symbol m and two operators, unit and bind, that adheres to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [H3].)

Here are operators for a monad associated with a certain symbol M:

  1. monad unit function (“return” in Haskell notation) is Unit[x_] := M[x];
  2. monad bind function (“>>=” in Haskell notation) is a rule like Bind[M[x_], f_] := f[x] with MatchQ[f[x],M[_]] giving True.

Note that:

  • the function Bind unwraps the content of M[_] and gives it to the function f;
  • the functions fi are responsible for returning results wrapped with the monad symbol M.

Here is an illustration formula showing a monad pipeline:

Monad-formula-generic

From the definition and formula it should be clear that if for the result of Bind[M[x], f] the test MatchQ[f[x], _M] is True, then the result is ready to be fed to the next binding operation in the monad’s pipeline. Also, it is clear that it is easy to program the pipeline functionality with Fold:

Fold[Bind, M[x], {f1, f2, f3}]

(* Bind[Bind[Bind[M[x], f1], f2], f3] *)

The monad laws

The monad laws definitions are taken from [H1] and [H3]. In the monad laws given below the symbol “⟹” is for the monad’s binding operation and “↦” is for a function in anonymous form.

Here is a table with the laws:

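In Mathematica / WL terms, using the Unit and Bind names from the definition above, the three laws can be sketched as follows:

(* Left identity: lifting a value and then binding f is the same as applying f. *)
Bind[Unit[x], f] == f[x]

(* Right identity: binding with Unit leaves the monadic value unchanged. *)
Bind[M[x], Unit] == M[x]

(* Associativity: the grouping of consecutive bindings does not matter. *)
Bind[Bind[M[x], f], g] == Bind[M[x], Function[y, Bind[f[y], g]]]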
Remark: The monad laws are satisfied for every symbol in Mathematica / WL with List being the unit operation and Apply being the binding operation.

Expected monadic programming features

Looking at formula (1) — and having certain programming experiences — we can expect the following features when using monadic programming.

  • Computations that can be expressed with monad pipelines are easy to construct and read.
  • By programming the binding function we can tuck-in a variety of monad behaviours — this is the so called “programmable semicolon” feature of monads.
  • Monad pipelines can be constructed with Fold, but with suitable definitions of infix operators like DoubleLongRightArrow (⟹) we can produce code that resembles the pipeline in formula (1).
  • A monad pipeline can have polymorphic behaviour by overloading the signatures of fi (and if we have to, Bind.)

These points are clarified below. For more complete discussions see [Wk1] or [H3].

The basic Maybe monad

It is fairly easy to program the basic monad Maybe discussed in [Wk1].

The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails then the whole pipeline returns a designated failure symbol, say None; otherwise the result after the last step is wrapped in another designated symbol, say Maybe.

Here is the special version of the generic pipeline formula (1) for the Maybe monad:

"Monad-formula-maybe"

“Monad-formula-maybe”

Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):

MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];

MaybeUnit[None] := None;
MaybeUnit[x_] := Maybe[x];

MaybeBind[None, f_] := None;
MaybeBind[Maybe[x_], f_] := 
  Block[{res = f[x]}, If[FreeQ[res, None], res, None]];

MaybeEcho[x_] := Maybe@Echo[x];
MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];

MaybeOption[f_][xs_] := 
  Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];

In order to give the code we write a pipeline form, let us define a suitable infix operator (like “⟹”) that uses MaybeBind:

DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
DoubleLongRightArrow[x_, y_, z__] := 
  DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];

Here is an example of a Maybe monad pipeline using the definitions so far:

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

MaybeUnit[data]⟹(* lift data into the monad *)
 (Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 MaybeEcho⟹(* display current value *)
 (Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
 None *)

The result is None because:

  1. the data has a number that is too small, and
  2. the definition of MaybeBind stops the pipeline aggressively using a FreeQ[_,None] test.

Monad laws verification

Let us convince ourselves that the current definition of MaybeBind gives a monad.

The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.

"Monad-laws-table-Maybe"

“Monad-laws-table-Maybe”
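Here is a sketch of such checks for the definitions above:

f = Maybe[#^2] &; g = Maybe[# + 1] &;

(* Left identity *)
MaybeBind[MaybeUnit[3], f] === f[3]
(* True *)

(* Right identity *)
MaybeBind[Maybe[3], MaybeUnit] === Maybe[3]
(* True *)

(* Associativity *)
MaybeBind[MaybeBind[Maybe[3], f], g] === MaybeBind[Maybe[3], MaybeBind[f[#], g] &]
(* True *)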

Extensions with polymorphic behavior

We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.

For example, the extension of the Maybe monad to handle Dataset objects is fairly easy and straightforward.

Here is the formula of the Maybe monad pipeline extended with Dataset objects:

Here is an example of a polymorphic function definition for the Maybe monad:

MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];

MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];
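Here is a sketch of using the same pipeline function over a plain list and over a Dataset (assuming the Maybe definitions from the previous sections):

(* The list definition is used: *)
MaybeUnit[{1, 2, 3, 4, 5}]⟹MaybeFilter[# > 2 &]
(* Maybe[{3, 4, 5}] *)

(* The Dataset definition is used: *)
MaybeUnit[Dataset[{<|"x" -> 1|>, <|"x" -> 3|>, <|"x" -> 5|>}]]⟹MaybeFilter[#x > 2 &]
(* Maybe[Dataset[...]] containing the rows with x > 2 *)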

See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.

A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)

Polymorphic monads in R’s dplyr

The R package dplyr, [R1], has implementations centered around monadic polymorphic behavior. The command pipelines based on dplyr can work on R data frames, SQL tables, and Spark data frames without changes.

Here is a diagram of a typical work-flow with dplyr:

"dplyr-pipeline"

The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.

Monad code generation

We can see monad code definitions like the ones for Maybe as some sort of initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates; (see [WL1]). As it was mentioned in the introduction, we do not deal with types for monads explicitly, we generate code for monads instead.

In this section examples are given with packages that generate monad codes. The case study sections have examples of packages that utilize generated monad codes.

Maybe monads code generation

The package [AA2] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section “General work-flow of monad code generation utilization”.)

Here is an example:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]

GenerateMaybeMonadCode["AnotherMaybe"]

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

AnotherMaybeUnit[data]⟹(* lift data into the monad *)
 (AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 AnotherMaybeEcho⟹(* display current value *)
 (AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
   AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
   None *)

We see that we get the same result as above (None) and a message prompting failure.

State monads code generation

The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [AA3].)

Here is the special version of the generic pipeline formula (1) for the State monad:

"Monad-formula-State"

“Monad-formula-State”

Note that since the State monad pipeline carries both a value and a state, it is a good idea to have functions that manipulate them separately. For example, we can have functions for context modification and context retrieval. (These are done in [AA3].)

This loads the package [AA3]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]

This generates the State monad for the prefix “StMon”:

GenerateStateMonadCode["StMon"]

The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Functions for context deposit and retrieval are invoked several times.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
  StMonEchoValue⟹
  StMonEchoContext⟹
  StMonAddToContext["data"]⟹
  StMonEchoContext⟹
  (StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["data"]⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["mark"]⟹
  StMonEchoValue;

(* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   context: <|mark->TooSmall,threshold->0.5|>
   context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
   value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
   value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   value: TooSmall *)

Flow control in monads

We can implement dedicated functions for governing the pipeline flow in a monad.

Let us look at a breakdown of these kind of functions using the State monad StMon generated above.

Optional acceptance of a function result

A basic and simple pipeline control function is for optional acceptance of a result — if a failure is obtained when applying f then we ignore its result (and keep the current pipeline value).

Here is an example with StMonOption:

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonOption[If[# < 0.3, None] & /@ # &]⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)

Without StMonOption we get failure:

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 (If[# < 0.3, None] & /@ # &)⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
   None *)

Conditional execution of functions

It is natural to want to have the ability to choose a pipeline function application based on a condition.

This can be done with the functions StMonIfElse and StMonWhen.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonIfElse[
  Or @@ (# < 0.4 & /@ #) &,
  (Echo["A too small value is present.", "warning:"]; 
    StMon[Style[#1, Red], #2]) &,
  StMon[Style[#1, Blue], #2] &]⟹
 StMonEchoValue

 (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    warning: A too small value is present.
    value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)
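
For completeness, here is a sketch of the corresponding StMonWhen usage. (This is a minimal sketch assuming the generated StMonWhen[testFunc, f] applies f only when testFunc gives True over the current pipeline value, and otherwise passes the value through unchanged.)

(* Apply the "warning" function only when at least one value is below 0.4. *)
SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonWhen[
  Or @@ (# < 0.4 & /@ #) &,
  (Echo["A too small value is present.", "warning:"]; StMon[Style[#1, Red], #2]) &]⟹
 StMonEchoValue;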

Remark: Using flow control functions like StMonIfElse and StMonWhen with appropriate messages is a better way of handling computations that might fail. The silent failure handling of the basic Maybe monad is convenient only in a small number of use cases.

Iterative functions

The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of Nest, NestWhile, FoldList, etc.

In [AA3] these functionalities are provided through the function StMonIterate.

Here is a basic example using Nest that corresponds to Nest[#+1&,1,3]:

StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]

(* StMon[4, <||>] *)

Consider this command that uses the full signature of NestWhileList:

NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]

(* {1, 2, 3, 4, 5} *)

Here is the corresponding StMon iteration code:

StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]

(* StMon[{1, 2, 3, 4, 5}, <||>] *)

Here is another result-accumulation example, with FixedPointList:

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)

When the functions NestList, NestWhileList, FixedPointList are used with StMonIterate, their results can be stored in the context. Here is an example:

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>], 
    StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>], 
    StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)

More elaborate tests can be found in [AA8].

Partial pipelines

Because of the associativity law we can design pipeline flows based on functions made of “sub-pipelines.”

fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];

fDIter = Function[{x, ct}, 
   StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];

StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;

(*
  value: 7
  context: <||>
  value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
  context: <||> *)

General work-flow of monad code generation and utilization

With the ability to generate and utilize monad code it is natural to consider the following work-flow. (Also shown in the diagram below.)

  1. Come up with an idea that can be expressed with monadic programming.
  2. Look for suitable monad implementation.
  3. If there is no such implementation, make one (or two, or five.)
  4. Having a suitable monad implementation, generate the monad code.
  5. Implement additional pipeline functions addressing envisioned use cases.
  6. Start making pipelines for the problem domain of interest.
  7. Are the pipelines satisfactory? If not, go to 5. (Or 2.)

"make-monads"

Monad templates

The template nature of the general monads can be exemplified with the groups of functions in the package StateMonadCodeGenerator.m, [AA3].

They are in five groups:

  1. base monad functions (unit construction and testing, binding),
  2. display of the value and context,
  3. context manipulation (deposit, retrieval, modification),
  4. flow governing (optional new value, conditional function application, iteration),
  5. other convenience functions.

We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.
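
One quick way to see this template at work is to list the symbols generated for a given prefix, here the prefix “StMon” generated earlier. (A minimal sketch; the exact list depends on the generator version.)

(* List the functions generated by GenerateStateMonadCode["StMon"]. *)
Names["StMon*"]

(* A list that includes, e.g., "StMonUnit", "StMonBind", "StMonEchoValue", "StMonEchoContext",
   "StMonAddToContext", "StMonRetrieveFromContext", "StMonOption", "StMonIfElse",
   "StMonWhen", "StMonIterate", ... *)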

Software design with monadic programming

The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).

The answers to the question “When to use monadic programming?” can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (as in Haskell and Scala). Here we discuss making specialized software with or within already existing systems.

Framework design

Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that: 1) significant speed-up can be achieved when making new applications, and 2) a set of policies can be imposed on the new applications.

The rigidness of the framework provides and supports its flexibility — the framework has a backbone of rigid parts and a set of “hot spots” where new functionalities are plugged-in.

Usually Object-Oriented Programming (OOP) frameworks provide inversion of control — the general work-flow is already established, and only parts of it are changed. (This is characterized with “leave the driving to us” and “don’t call us, we will call you.”)

The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)

In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)

DSL development

Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.

Generally, in order to make a DSL we have to choose the language syntax and grammar. With monadic programming the syntax and grammar are already clear: the monad pipelines are the commands. What is left is “just” the choice of particular functions and their implementations.

Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar — without developing the corresponding interpreters — would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.

Contextual monad classification (case study)

In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.

Motivation

We want to provide a DSL for doing machine learning classification tasks that allows us:

  1. to do basic summarization and visualization of the data,
  2. to control splitting of the data into training and testing sets;
  3. to apply the built-in classifiers;
  4. to apply classifier ensembles (see [AA9] and [AA10]);
  5. to evaluate classifier performances with standard measures, and
  6. to make ROC plots.

Also, we want the DSL design to provide clear directions on how to add (hook up or plug in) new functionalities.

The package [AA4] discussed below provides such a DSL through monadic programming.

Package and data loading

This loads the package [AA4]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]

This gets some test data (the Titanic dataset):

dataName = "Titanic";
ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
If[dataName == "FisherIris", varNames = Most[varNames]];
ds = ds[All, AssociationThread[varNames -> #] &];

Monad design

The package [AA4] provides functions for the monad ClCon — the functions implemented in [AA4] have the prefix “ClCon”.

The classifier contexts are Association objects. The pipeline values can have the form:

ClCon[ val, context:(_String|_Association) ]

The ClCon-specific monad functions deposit values into or retrieve values from the context with the keys: “trainData”, “testData”, “classifier”. The general idea is that if the current pipeline value cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, a message is issued. This is illustrated with the following example of a pipeline with comments.

"ClCon-basic-example"

The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context: different functions are used if that variable is a ClassifierFunction object or an association of named ClassifierFunction objects.

Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure. With those kinds of operations it would be easy to make interpreters for natural language DSLs.
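
Here is a small sketch of a basic ClCon pipeline over the Titanic data loaded above. (It uses only ClCon functions demonstrated in the next sub-section; the classifier method name is just an illustration.)

(* Split the data, train a classifier, and compute standard measures. *)
SeedRandom[26]
ClConUnit[ds]⟹
  ClConSplitData[0.75]⟹
  ClConMakeClassifier["NearestNeighbors"]⟹
  ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
  ClConEchoValue;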

Another usage example

The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, and an acceptance test; if the results are rejected, a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting, which allows the conditional classifier switch to be specified concisely.

First, let us define a function that takes a Classify method as an argument, makes a classifier, and calculates performance measures.

ClSubPipe[method_String] :=
  Function[{x, ct},
   ClConUnit[x, ct]⟹
    ClConMakeClassifier[method]⟹
    ClConEchoFunctionContext["classifier:", 
     ClassifierInformation[#["classifier"], Method] &]⟹
    ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
    ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
    ClConEchoValue⟹
    ClConEchoFunctionContext[
     ClassifierMeasurements[#["classifier"], 
     ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
   ];

Using the sub-pipeline function ClSubPipe we make the outlined pipeline.

SeedRandom[12]
res =
  ClConUnit[ds]⟹
   ClConSplitData[0.7]⟹
   ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
   ClConEchoFunctionValue["xtabs:", 
    MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
   ClSubPipe["LogisticRegression"]⟹
   (If[#1["Accuracy"] > 0.8,
      Echo["Good accuracy!", "Success:"]; ClConFail,
      Echo["Make a new classifier", "Inaccurate:"]; 
      ClConUnit[#1, #2]] &)⟹
   ClSubPipe["RandomForest"];

"ClCon-pipeline-2-output"

Tracing monad pipelines (case study)

The monadic implementations in the package MonadicTracing.m, [AA5], allow tracking of the pipeline execution of functions within other monads.

The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation. (I.e. without conversion to strings etc.)

It turned out that by programming MonadicTracing.m I came up with a monad transformer; see [Wk2], [H2].

Package loading

This loads the package [AA5]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Usage example

This generates a Maybe monad to be used in the example (for the prefix “Perhaps”):

GenerateMaybeMonadCode["Perhaps"]
GenerateMaybeMonadSpecialCode["Perhaps"]

In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings. Producing the grid of functions and comments happens “naturally” with the monad function TraceMonadEchoGrid.

data = RandomInteger[10, 15];

TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
  TraceMonadEchoContext⟹
  PerhapsFilter[# > 3 &]⟹"filter current value"⟹
  PerhapsEcho⟹"display current value"⟹
  PerhapsWhen[#[[3]] > 3 &, 
   PerhapsEchoFunction[Style[#, Red] &]]⟹
  (Perhaps[#/4] &)⟹
  PerhapsEcho⟹"display current value again"⟹
  TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];

Note that:

  1. the tracing is initiated by just using TraceMonadUnit;
  2. pipeline functions (actual code) and comments are interleaved;
  3. putting a comment string after a pipeline function is optional.

Another example is the ClCon pipeline in the sub-section “Monad design” in the previous section.

Summary

This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features for day-to-day programming and for coming up with architectural designs.

The style is based on WL’s basic language features. As a consequence it is fairly concise and introduces little overhead.

Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.

References

Monadic programming

[Wk1] Wikipedia entry: Monad (functional programming), URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry: Monad transformer, URL: https://en.wikipedia.org/wiki/Monad_transformer .

[Wk3] Wikipedia entry: Software Design Pattern, URL: https://en.wikipedia.org/wiki/Software_design_pattern .

[H1] Haskell.org article: Monad laws, URL: https://wiki.haskell.org/Monad_laws.

[H2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333–343. doi:10.1145/199448.199528.

[H3] Philip Wadler, “The essence of functional programming”, (1992), 19’th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.

R

[R1] Hadley Wickham et al., dplyr: A Grammar of Data Manipulation, (2014), tidyverse at GitHub, URL: https://github.com/tidyverse/dplyr . (See also, http://dplyr.tidyverse.org .)

Mathematica / Wolfram Language

[WL1] Leonid Shifrin, “Metaprogramming in Wolfram Language”, (2012), Mathematica StackExchange. (Also posted at Wolfram Community in 2017.) URL of the Mathematica StackExchange answer: https://mathematica.stackexchange.com/a/2352/34008 . URL of the Wolfram Community post: http://community.wolfram.com/groups/-/m/t/1121273 .

MathematicaForPrediction

[AA1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction .

[AA2] Anton Antonov, Maybe monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m .

[AA3] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AA4] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AA5] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AA6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AA7] Anton Antonov, Simple monadic programming, (2017), MathematicaForPrediction at GitHub. (Preliminary version, 40% done.) URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf .

[AA8] Anton Antonov, Generated State Monad Mathematica unit tests, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m .

[AA9] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

[AA10] Anton Antonov, “ROC for classifier ensembles, bootstrapping, damaging, and interpolation”, (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

Object-Oriented Design Patterns in Mathematica

Introduction

In this blog post I would like to announce the recent completion of the first version of a document describing how to implement the most important (in my view) Object-Oriented Programming Design Patterns by GoF.

Here is the link to the document in MathematicaForPrediction at GitHub:

“Implementation of Object-Oriented Programming Design Patterns in Mathematica”, [1].

That document presents a particular style of programming in Mathematica (Wolfram Language) that allows the application of the Object-Oriented Programming (OOP) paradigm. The approach does not require the use of preliminary implementations, packages, or extra code. Using the OOP paradigm is achieved by following a specific programming style and conventions with native, fundamental programming constructs in Mathematica’s programming language.

A side product of working on this document is a Mathematica package for creating UML diagrams, announced in the previous post, “UML diagram creation and generation”.

Below are several topics I consider important that are not covered, or only briefly considered, in that document.

Related presentation

Here is a video recording of my presentation “Object Oriented Design Patterns” at the Wolfram Technology Conference 2015. The recording is also available on YouTube.

Here is a link to the presentation notebook: “Object-Oriented Design Patterns with Wolfram Language”.

Why use design patterns in Mathematica

For large development projects it is a good idea to use the well established, understood, and documented Design Patterns. Design Patterns help overcome limitations of programming languages, give higher level abstractions for program design, and provide design transformation guidance. Because of extensive documentation and examples, Design Patterns help knowledge transfer and communication between developers.

Because of these observations it is much better to emulate OOP in Mathematica through Design Patterns than through emulation of OOP objects. (The latter is done in all other approaches and projects I have seen.)

As stated above, the proposed method is minimalistic: only native language features of Mathematica are used.

The larger context of Design Patterns

This diagram shows the larger context of patterns:

VennDiagramForPatternsInGeneral-small

The Mathematica implementations discussed in [1] are for “Design Patterns by GoF”. One of the design patterns, “Interpreter”, is extended with the package FunctionalParsers.m that uses so-called “monadic programming”. (Hence the overlap of “Design Patterns by GoF” with “Functional programming patterns”.)

In order to provide a feel for the larger context in the diagram I have referenced the book “Go Rin no Sho” (“The book of five scrolls”) by Miyamoto Musashi. We can say that the book contains patterns applicable in antagonistic conflicts. It presents the patterns in abstract forms applicable to person-to-person fights, battles between armies, or other antagonistic settings. Another interesting idea in this book is that if you practice the explained strategy with a samurai sword you will become capable of applying that strategy when leading an army.

Personal experiences with design patterns

I was introduced to OOP Design Patterns at the ECOOP’99 conference. At the time I was working on my Ph.D. in the field of large scale air pollution simulations.

Air pollution simulations over continents, like Europe, are grand challenge problems that encompass several scientific sub-cultures: computational fluid dynamics, computational chemistry, computational geometry, and parallel programming. I did not want to just produce an OOP framework for addressing this problem — I wanted to produce the best OOP framework for the set of adopted methods for solving these kinds of problems.

One way to design such a framework is to use Design Patterns, and I did so using C++. Because I wanted to bring sound arguments to my Ph.D. defense that I had derived one of the best possible designs, I had to use some formal method for judging the designs made with Design Patterns. I introduced the relational algebra of Database theory into the OOP Design Patterns, and I was able to come up with some sort of proof of why the framework was designed well. More practically, this was proven by developing, running, and obtaining results with different numerical methods for the air-pollution problems.

In my Ph.D. thesis I showed how to prove that Design Patterns provide robust code construction through the theory of Relational Databases, [2].

Justifying Design Patterns with the theory of Relational Databases

One of the key concepts and goals in OOP is reuse. In OOP, when developing software we do not want changes in one functional part to bring domino-effect changes in other parts. Design Patterns help with that. Relational Databases were developed with similar goals in mind — we want data changes to be confined to only one relevant place. We can view OOP code as a database, with the signatures of the functions being the identifiers and the function bodies being the data. If we bring that code-database into third normal form, we achieve the stability with respect to changes that is desired in OOP. We can interpret the individual Design Patterns as bringing the code into such third normal forms for the satisfaction of different types of anticipated code changes.

References

[1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016), MathematicaForPrediction project at GitHub.

[2] Anton Antonov, Object-oriented framework for large scale air pollution models, 2001. Ph.D. thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU.