A monad for Latent Semantic Analysis workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad, [Wk1], for the specification and execution of Latent Semantic Analysis workflows. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Latent Semantic Analysis (LSA)? : A statistical method (or a technique) for finding relationships in natural language texts that is based on the so-called Distributional hypothesis, [Wk2, Wk3]. (The Distributional hypothesis can be simply stated as “linguistic items with similar distributions have similar meanings”; for an insightful philosophical and scientific discussion see [MS1].) LSA can be seen as the application of Dimensionality reduction techniques over matrices derived with the Vector space model.

The goal of the monad design is to make the specification of LSA workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario.

The monad is named LSAMon and it is based on the State monad package “StateMonadCodeGenerator.m”, [AAp1, AA1], the document-term matrix making package “DocumentTermMatrixConstruction.m”, [AAp4, AA2], the Non-Negative Matrix Factorization (NNMF) package “NonNegativeMatrixFactorization.m”, [AAp5, AA2], and the package “SSparseMatrix.m”, [AAp2, AA5], that provides matrix objects with named rows and columns.

The data for this document is obtained from WL’s repository and it is manipulated into a certain ready-to-utilize form (and uploaded to GitHub.)

The monadic programming design is used as a Software Design Pattern. The LSAMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of latent semantic analysis workflows.

Here is an example of using the LSAMon monad over a collection of documents that consists of 233 US State of the Union speeches.

LSAMon-Introduction-pipeline
LSAMon-Introduction-pipeline-echos

The table above is produced with the package “MonadicTracing.m”, [AAp2, AA1], and some of the explanations below also utilize that package.

As mentioned above, the monad LSAMon can be seen as a DSL. Because of this, the monad pipelines made with LSAMon are sometimes called “specifications”.

Remark: In this document with “term” we mean “a word, a word stem, or other type of token.”

Remark: LSA and Latent Semantic Indexing (LSI) are considered more or less to be synonyms. I think that “latent semantic analysis” sounds more universal and that “latent semantic indexing” as a name refers to a specific Information Retrieval technique. Below we refer to “LSI functions” like “IDF” and “TF-IDF” that are applied within the generic LSA workflow.

Contents description

The document has the following structure.

  • The sections “Package load” and “Data load” obtain the needed code and data.
  • The sections “Design considerations” and “Monad design” provide motivation and rationale for the design decisions.

  • The sections “LSAMon overview”, “Monad elements”, and “The utilization of SSparseMatrix objects” provide technical descriptions needed to utilize the LSAMon monad.

    • (Using a fair amount of examples.)
  • The section “Unit tests” describes the tests used in the development of the LSAMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section “Future plans” outlines future directions of development.
  • The section “Implementation notes” just says that LSAMon’s development process and this document follow those of the classification workflows monad ClCon, [AA6].

Remark: One can read only the sections “Introduction”, “Design considerations”, “Monad design”, and “LSAMon overview”. That set of sections provides a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp7]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load data that is used in the rest of the document. The text data was obtained through WL’s repository, transformed into a more convenient form, and uploaded to GitHub.

The text summarization and plots are done through LSAMon, which in turn uses the function RecordsSummary from the package “MathematicaForPredictionUtilities.m”, [AAp7].

Hamlet

textHamlet = 
  ToString /@ 
   Flatten[Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Data/MathematicaVsR-Data-Hamlet.csv"]];

TakeLargestBy[
 Tally[DeleteStopwords[ToLowerCase[Flatten[TextWords /@ textHamlet]]]], #[[2]] &, 20]

(* {{"ham", 358}, {"lord", 225}, {"king", 196}, {"o", 124}, {"queen", 120}, 
    {"shall", 114}, {"good", 109}, {"hor", 109}, {"come",  107}, {"hamlet", 107}, 
    {"thou", 105}, {"let", 96}, {"thy", 86}, {"pol", 86}, {"like", 81}, {"sir", 75}, 
    {"'t", 75}, {"know", 74}, {"enter", 73}, {"th", 72}} *)

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonEchoDocumentTermMatrixStatistics;
LSAMon-Data-Load-Hamlet-echo

USA state of union speeches

url = "https://github.com/antononcube/MathematicaVsR/blob/master/Data/MathematicaVsR-Data-StateOfUnionSpeeches.JSON.zip?raw=true";
str = Import[url, "String"];
filename = First@Import[StringToStream[str], "ZIP"];
aStateOfUnionSpeeches = Association@ImportString[Import[StringToStream[str], {"ZIP", filename, "String"}], "JSON"];

lsaObj = 
LSAMonUnit[aStateOfUnionSpeeches]⟹
LSAMonMakeDocumentTermMatrix⟹
LSAMonEchoDocumentTermMatrixStatistics["LogBase" -> 10];
LSAMon-Data-Load-StateOfUnionSpeeches-echo
TakeLargest[ColumnSumsAssociation[lsaObj⟹LSAMonTakeDocumentTermMatrix], 12]

(* <|"government" -> 7106, "states" -> 6502, "congress" -> 5023, 
     "united" -> 4847, "people" -> 4103, "year" -> 4022, 
     "country" -> 3469, "great" -> 3276, "public" -> 3094, "new" -> 3022, 
     "000" -> 2960, "time" -> 2922|> *)

Stop words

In some of the examples below we want to explicitly specify the stop words. Here are stop words derived using the built-in functions DictionaryLookup and DeleteStopwords.

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];

Short[stopWords]

(* {"a", "about", "above", "across", "add-on", "after", "again", <<290>>, 
   "you'll", "your", "you're", "yours", "yourself", "yourselves", "you've" } *)
    

Design considerations

The steps of the main LSA workflow addressed in this document follow.

  1. Get a collection of documents with associated ID’s.

  2. Create a document-term matrix.

    1. Here we apply the Bag-of-words model and Vector space model.
      1. The sequential order of the words is ignored and each document is represented as a point in a multi-dimensional vector space.

      2. The axes of that vector space correspond to the unique words found in the whole document collection.

    2. Consider the application of stemming rules.

    3. Consider the removal of stop words.

  3. Apply matrix-entries weighting functions.

    1. Those functions come from LSI.

    2. Functions like “IDF”, “TF-IDF”, “GFIDF”.

  4. Extract topics.

    1. One possible statistical way of doing this is with Dimensionality reduction.

    2. We consider using Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF).

  5. Make and display the topics table.

  6. Extract and display a statistical thesaurus of selected words.

  7. Map search queries or unseen documents over the extracted topics.

  8. Find the most important documents in the document collection. (Optional.)

The following flow-chart corresponds to the list of steps above.

LSA-worflows
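As a preview of how these steps map onto code, here is a sketch of an LSAMon pipeline over the State of the Union speeches loaded in the section “Data load”. (The individual operations are explained in the sections “LSAMon overview” and “Monad elements”; the particular option values here are illustrative, not prescriptive.)

SeedRandom[893]
lsaObj =
  LSAMonUnit[aStateOfUnionSpeeches]⟹(*step 1*)
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Automatic]⟹(*step 2*)
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹(*step 3*)
   LSAMonExtractTopics["NumberOfTopics" -> 24, Method -> "NNMF"]⟹(*step 4*)
   LSAMonEchoTopicsTable["NumberOfTerms" -> 10]⟹(*step 5*)
   LSAMonEchoStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, {"war", "economy"}]];(*step 6*)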

In order to address:

  • the introduction of new elements in LSA workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for LSA workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make an LSA workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […]

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as “pipelines”. We use the terms “monad pipeline” and “pipeline” below.

Monad design

The monad we consider is designed to speed up the programming of the LSA workflows outlined in the previous section. The monad is named LSAMon for “Latent Semantic Analysis Monad”.

We want to be able to construct monad pipelines of the general form:

LSAMon-Monad-Design-formula-1

LSAMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

LSAMon-Monad-Design-formula-2

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of LSAMon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or

  2. The object is (relatively) hard to compute.

Such objects are the document-term matrix, the dimensionality reduction factors, and the related topics.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial LSA workflows.

  • The monad works with lists of strings and with associations that have string values.

  • The monad uses the linear vector space model.

  • The document-term frequency matrix can be created after stop word removal and/or word stemming.

  • It is easy to specify and apply different LSI weight functions. (Like “IDF” or “GFIDF”.)

  • The monad can do dimension reduction with SVD and NNMF, and the corresponding matrix factors are retrievable with monad functions.

  • Documents (or query strings) external to the monad are easily mapped into the monad’s linear vector space of terms and the linear vector space of topics.

  • The monad allows cursory examination and summarization of the data.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • It is easy to tabulate extracted topics and related statistical thesauri.

  • It is easy to specify and apply re-weighting functions for the entries of the document-term contingency matrices.

The LSAMon components and their interactions are fairly simple.

The main LSAMon operations implicitly put in the context or utilize from the context the following objects:

  • document-term matrix,

  • the factors obtained by matrix factorization algorithms,

  • extracted topics.

Note that the set of types of LSAMon pipeline values is fairly heterogeneous, and certain awareness of “the current pipeline value” is assumed when composing LSAMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package “StateMonadCodeGenerator.m”, [AAp1].

LSAMon overview

When using a monad we lift certain data into the “monad space”, using the monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the “lifting” into the LSAMon monad is done with the function LSAMonUnit. Results from the monad can be obtained with the functions LSAMonTakeValue, LSAMonTakeContext, or with the other LSAMon functions with the prefix “LSAMonTake” (see below.)

Here is a corresponding diagram of a generic computation with the LSAMon monad:

LSAMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete LSAMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is shown together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The LSAMon functions are separated into four groups:

  • operations,

  • setters and droppers,

  • takers,

  • State Monad generic functions.

Monad functions interaction with the pipeline value and context

An overview of those functions is given in the tables in the next two sub-sections. The next section, “Monad elements”, gives details and examples for the usage of the LSAMon operations.

LSAMon-Overview-operations-context-interactions-table
LSAMon-Overview-setters-droppers-takers-context-interactions-table

State monad functions

Here are the LSAMon State Monad functions (generated using the prefix “LSAMon”, [AAp1, AA1].)

LSAMon-Overview-StMon-usage-descriptions-table

Main monad functions

Here are the usage descriptions of the main (not monad-supportive) LSAMon functions, which are explained in detail in the next section.

LSAMon-Overview-operations-usage-descriptions-table

Monad elements

In this section we show that LSAMon has all of the properties listed in the previous section.

The monad head

The monad head is LSAMon. Anything wrapped in LSAMon can serve as the monad’s pipeline value. It is better though to use the constructor LSAMonUnit. (Which adheres to the definition in [Wk1].)

LSAMon[textHamlet, <||>]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic]⟹LSAMonEchoFunctionContext[Short];

Lifting data to the monad

The function lifting the data into the monad LSAMon is LSAMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

LSAMonUnit[]⟹LSAMonSetDocuments[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

(See the sub-section “Setters, droppers, and takers” for more details of setting and taking values in LSAMon contexts.)

Currently the monad can deal with data in the following forms:

  • vectors of strings,

  • associations with string values.

Generally, WL makes it easy to extract columns from datasets in order to obtain vectors or matrices, so datasets are not currently supported in LSAMon.

Making of the document-term matrix

As mentioned above, with “term” we mean “a word or a stemmed word”. Here are examples of stemmed words.

WordData[#, "PorterStem"] & /@ {"consequential", "constitution", "forcing", ""}

The fundamental model of LSAMon is the so-called Vector space model (or the closely related Bag-of-words model.) The document-term matrix is a linear vector space representation of the document collection. That representation is further used in LSAMon to find topics and statistical thesauri.

Here is an example of ad hoc construction of a document-term matrix using a couple of paragraphs from “Hamlet”.

inds = {10, 19};
aTempText = AssociationThread[inds, textHamlet[[inds]]]

MatrixForm @ CrossTabulate[Flatten[KeyValueMap[Thread[{#1, #2}] &, TextWords /@ ToLowerCase[aTempText]], 1]]

When we construct the document-term matrix we (often) want to stem the words and (almost always) want to remove stop words. LSAMon’s function LSAMonMakeDocumentTermMatrix makes the document-term matrix and takes specifications for stemming and stop words.

lsaObj =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Automatic]⟹
   LSAMonEchoFunctionContext[ MatrixPlot[#documentTermMatrix] &]⟹
   LSAMonEchoFunctionContext[TakeLargest[ColumnSumsAssociation[#documentTermMatrix], 12] &];

We can retrieve the stop words used in a monad with the function LSAMonTakeStopWords.

Short[lsaObj⟹LSAMonTakeStopWords]

We can retrieve the stemming rules used in a monad with the function LSAMonTakeStemmingRules.

Short[lsaObj⟹LSAMonTakeStemmingRules]

The specification Automatic for stemming rules uses WordData[#,"PorterStem"]&.

Instead of the options style signature we can use a positional signature.

  • Options style: LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic] .

  • Positional style: LSAMonMakeDocumentTermMatrix[{}, Automatic] .

LSI weight functions

After making the document-term matrix we will most likely apply LSI weight functions, [Wk2], like “GFIDF” and “TF-IDF”. (This follows the “standard” approach used in search engines for calculating weights for document-term matrices; see [MB1].)

Frequency matrix

We use the following definition of the frequency document-term matrix F.

Each entry f(i, j) of the matrix F is the number of occurrences of term j in document i.

Weights

Each entry of the weighted document-term matrix M derived from the frequency document-term matrix F is expressed with the formula

m(i, j) = g(j) l(i, j) d(i),

where g(j) is the global term weight, l(i, j) is the local term weight, and d(i) is the normalization weight.

Various formulas exist for these weights and one of the challenges is to find the right combination of them when using different document collections.

Here is a table of weight functions formulas.

LSAMon-LSI-weight-functions-table
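For reference, here is how one common form of the “IDF” global weight can be computed directly from a frequency matrix in plain WL. (This is only a sketch outside of the monad; the actual weight computations are delegated to the package “DocumentTermMatrixConstruction.m”, [AAp4], as described below.)

(* "IDF" global weights: Log[m/n(j)], where m is the number of documents
   and n(j) is the number of documents containing term j *)
idfWeights[fmat_?MatrixQ] :=
  With[{m = Length[fmat]},
   Log[m/Map[Count[#, v_ /; v > 0] &, Transpose[fmat]]]];

idfWeights[{{1, 0, 2}, {0, 0, 1}, {3, 1, 0}}]

(* {Log[3/2], Log[3], Log[3/2]} *)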

Computation specifications

LSAMon function LSAMonApplyTermWeightFunctions delegates the LSI weight functions application to the package “DocumentTermMatrixConstruction.m”, [AAp4].

Here is an example.

lsaHamlet = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix;
wmat =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat], 6]

Instead of using the positional signature of LSAMonApplyTermWeightFunctions we can specify the LSI functions using options.

wmat2 =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "TermFrequency", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat2], 6]

Here are summaries of the non-zero values of the weighted document-term matrix derived with different combinations of global, local, and normalization weight functions.

Magnify[#, 0.8] &@Multicolumn[Framed /@ #, 6] &@Flatten@
  Table[
   (wmat =
     lsaHamlet⟹
      LSAMonApplyTermWeightFunctions[gw, lw, nf]⟹
      LSAMonTakeWeightedDocumentTermMatrix;
    RecordsSummary[SparseArray[wmat]["NonzeroValues"], 
     List@StringRiffle[{gw, lw, nf}, ", "]]),
   {gw, {"IDF", "GFIDF", "Binary", "None", "ColumnStochastic"}},
   {lw, {"Binary", "Log", "None"}},
   {nf, {"Cosine", "None", "RowStochastic"}}]
AutoCollapse[]
LSAMon-LSI-weight-functions-combinations-application-table

Extracting topics

Streamlining topic extraction is one of the main reasons LSAMon was implemented. The topic extraction corresponds to the so-called “syntagmatic” relationships between the terms, [MS1].

Theoretical outline

The original weighted document-term matrix M is decomposed into the matrix factors W and H.

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H.

The rows of H are the topics. SVD produces orthogonal topics; NNMF does not.

The i-th document of the collection corresponds to the i-th row of W. Finding the Nearest Neighbors (NN’s) of the i-th document using the row similarities of the matrix W gives document NN’s through topic similarity.

The terms correspond to the columns of H. Finding NN’s based on similarities of H’s columns produces statistical thesaurus entries.

The term groups provided by H’s rows correspond to “syntagmatic” relationships. Using similarities of H’s columns we can produce term clusters that correspond to “paradigmatic” relationships.
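Here is a small numerical illustration of the factorization formula above, using SVD over a random matrix. (A plain WL sketch, not using the monad; in LSAMon the factors are SSparseMatrix objects with named rows and columns.)

SeedRandom[32];
mat = RandomReal[1, {5, 4}];
{u, s, v} = SingularValueDecomposition[mat, 3];
w = u.s; h = Transpose[v];
{Dimensions[w], Dimensions[h], Norm[mat - w.h, "Frobenius"]}

(* {{5, 3}, {3, 4}, ...} -- the last number is the rank-3 approximation error *)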

Computation specifications

Here is an example using the play “Hamlet” in which we specify additional stop words.

stopWords2 = {"enter", "exit", "[exit", "ham", "hor", "laer", "pol", "oph", "thy", "thee", "act", "scene"};

SeedRandom[2381]
lsaHamlet =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Join[stopWords, stopWords2]]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 12, "MinNumberOfDocumentsPerTerm" -> 10, Method -> "NNMF", "MaxSteps" -> 20]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-Hamlet-topics-table

Here is an example using the USA presidents “state of union” speeches.

SeedRandom[7681]
lsaSpeeches =
  LSAMonUnit[aStateOfUnionSpeeches]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic,  "StopWords" -> Automatic]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 40, Method -> "NNMF", "MaxSteps" -> 12]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-StateOfUnionSpeeches-topics-table

Note that in both examples:

  1. stemming is used when creating the document-term matrix,

  2. the default LSI re-weighting functions are used: “IDF”, “None”, “Cosine”,

  3. the dimension reduction algorithm NNMF is used.

Things to keep in mind.

  1. The interpretability provided by NNMF comes at a price.

  2. NNMF is prone to getting stuck in local minima, so several topic extractions (and corresponding evaluations) have to be done.

  3. We would get different results with different NNMF runs using the same parameters. (NNMF uses random initialization.)

  4. The NNMF topic vectors are not orthogonal.

  5. SVD is much faster than NNMF, but its topic vectors are hard to interpret.

  6. Generally, the topics derived with SVD are stable; they do not change between runs with the same parameters.

  7. The SVD topic vectors are orthogonal, which makes it quick to find representations of documents that are not in the monad’s document collection.
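To illustrate the last two points, here is a small plain-WL check that the SVD topic vectors (the rows of H = Transpose[v] below) are orthonormal:

SeedRandom[11];
mat = RandomReal[1, {6, 10}];
{u, s, v} = SingularValueDecomposition[mat, 3];
Chop[Transpose[v].v]

(* the 3 × 3 identity matrix *)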

The document-topic matrix W has column names that are automatically derived from the top three terms in each topic.

ColumnNames[lsaHamlet⟹LSAMonTakeW]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Of course, the rows of H have the same names.

RowNames[lsaHamlet⟹LSAMonTakeH]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Extracting statistical thesauri

The statistical thesaurus extraction corresponds to the “paradigmatic” relationships between the terms, [MS1].

Here is an example over the State of Union speeches.

entryWords = {"bank", "war", "economy", "school", "port", "health", "enemy", "nuclear"};

lsaSpeeches⟹
  LSAMonExtractStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12]⟹
  LSAMonEchoStatisticalThesaurus;
LSAMon-Extracting-statistical-thesauri-echo

In the code above: (i) the options signature style is used, (ii) the statistical thesaurus entry words are stemmed first.

We can also call LSAMonEchoStatisticalThesaurus directly without calling LSAMonExtractStatisticalThesaurus first.

 lsaSpeeches⟹
   LSAMonEchoStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12];
LSAMon-Extracting-statistical-thesauri-echo

Mapping queries and documents to terms

One of the most natural operations is to find the representation of an arbitrary document (or sentence or a list of words) in the monad’s linear vector space of terms. This is done with the function LSAMonRepresentByTerms.

Here is an example in which a sentence is represented as a one-row matrix (in that space.)

obj =
  lsaHamlet⟹
   LSAMonRepresentByTerms["Hamlet, Prince of Denmark killed the king."]⟹
   LSAMonEchoValue;

Here we display only the non-zero columns of that matrix.

obj⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];

Transformation steps

Assume that LSAMonRepresentByTerms is given a list of sentences. Then that function performs the following steps.

1. Each sentence is split into a list of words.

2. If the monad’s document-term matrix was made by removing stop words, the same stop words are removed from the list of words.

3. If the monad’s document-term matrix was made by stemming, the same stemming rules are applied to the list of words.

4. The LSI global weights and the LSI local weight and normalizer functions are applied to the contingency matrix of the sentences.
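Here is a plain WL sketch of step 1 together with forming a frequency (contingency) vector for a single sentence over a given list of terms. (The helper name sentenceTermVector is hypothetical; the monad additionally applies the stop words, stemming rules, and LSI weight specifications stored in its context.)

sentenceTermVector[sentence_String, terms : {_String ..}] :=
  Module[{words},
   words = ToLowerCase /@ TextWords[sentence];
   Lookup[Association[Rule @@@ Tally[words]], terms, 0]
  ];

sentenceTermVector["Hamlet, Prince of Denmark killed the king.", {"hamlet", "king", "queen"}]

(* {1, 1, 0} *)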

Equivalent representation

Let us convince ourselves that the documents used in the monad to build the weighted document-term matrix have the same representation as the corresponding rows of that matrix.

Here is an association of documents from monad’s document collection.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
queries
 
(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTerms[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["weightedDocumentTermMatrix"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["weightedDocumentTermMatrix"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-context-sub-matrix

Mapping queries and documents to topics

Another natural operation is to find the representation of an arbitrary document (or a list of words) in the monad’s linear vector space of topics. This is done with the function LSAMonRepresentByTopics.

Here is an example.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
Short /@ queries

(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTopics[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["W"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["W"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix

Theory

In order to clarify what the function LSAMonRepresentByTopics is doing let us go through the formulas it is based on.

The original weighted document-term matrix M is decomposed into the matrix factors W and H.

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H:

m_i ≈ w_i.H.

For a query vector q_0 ∈ ℝ^n we want to find its topics representation vector x ∈ ℝ^k:

q_0 ≈ x.H.

Denote with H^(-1) the inverse or pseudo-inverse matrix of H. We have:

q_0.H^(-1) ≈ (x.H).H^(-1) = x.(H.H^(-1)) = x.I,

x ∈ ℝ^k, H^(-1) ∈ ℝ^(n×k), I ∈ ℝ^(k×k).

In LSAMon for SVD H^(-1) = H^T; for NNMF H^(-1) is the pseudo-inverse of H.

The vector x is the one obtained with LSAMonRepresentByTopics.
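Here is a small numerical illustration of those formulas in plain WL. (A sketch; within the monad the computation is done by LSAMonRepresentByTopics over SSparseMatrix objects.)

SeedRandom[5];
h = RandomReal[1, {3, 7}];  (* a topics-by-terms factor H *)
q0 = RandomReal[1, 7];      (* a query vector in the terms space *)
x = q0.PseudoInverse[h]     (* its representation in the topics space *)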

Tags representation

Sometimes we want to find the topics representation of tags associated with monad’s documents and the tag-document associations are one-to-many. See [AA3].

Let us consider a concrete example – we want to find what topics correspond to the different presidents in the collection of State of Union speeches.

Here we find the document tags (president names in this case.)

tags = StringReplace[
   RowNames[
    lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 
   RegularExpression[".\\d\\d\\d\\d-\\d\\d-\\d\\d"] -> ""];
Short[tags]

Here is the number of unique tags (president names.)

Length[Union[tags]]
(* 42 *)

Here we compute the tag-topics representation matrix using the function LSAMonRepresentDocumentTagsByTopics.

tagTopicsMat =
 lsaSpeeches⟹
  LSAMonRepresentDocumentTagsByTopics[tags]⟹
  LSAMonTakeValue

Here is a heatmap plot of the tag-topics matrix made with the package “HeatmapPlot.m”, [AAp11].

HeatmapPlot[tagTopicsMat[[All, Ordering@ColumnSums[tagTopicsMat]]], DistanceFunction -> None, ImageSize -> Large]
LSAMon-Tags-representation-heatmap

Finding the most important documents

There are several algorithms we can apply for finding the most important documents in the collection. LSAMon utilizes two types of algorithms: (1) graph centrality measures based, and (2) matrix factorization based. With certain graph centrality measures the two algorithms are equivalent. In this sub-section we demonstrate the matrix factorization algorithm (that uses SVD.)

Definition: The most important sentences have the most important words and the most important words are in the most important sentences.

That definition can be used to derive an iterations-based model that can be expressed with SVD or eigenvector finding algorithms, [LE1].
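Here is a plain WL sketch of that model: rank the documents by the magnitudes of the entries of the first left singular vector of a (weighted) document-term matrix. (A sketch over a random matrix; LSAMonFindMostImportantDocuments packages a computation of this kind.)

SeedRandom[7];
mat = RandomReal[1, {6, 10}];  (* a stand-in for a weighted document-term matrix *)
{u, s, v} = SingularValueDecomposition[mat, 1];
importance = Abs[Flatten[u]];
Ordering[importance, -3]  (* positions of the three highest-scoring documents *)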

Here we pick an important part of the play “Hamlet”.

focusText = 
  First@Pick[textHamlet, StringMatchQ[textHamlet, ___ ~~ "to be" ~~ __ ~~ "or not to be" ~~ ___, IgnoreCase -> True]];
Short[focusText]

(* "Ham. To be, or not to be- that is the question: Whether 'tis ....y. 
    O, woe is me T' have seen what I have seen, see what I see!" *)

LSAMonUnit[StringSplit[ToLowerCase[focusText], {",", ".", ";", "!", "?"}]]⟹
  LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic]⟹
  LSAMonApplyTermWeightFunctions⟹
  LSAMonFindMostImportantDocuments[3]⟹
  LSAMonEchoFunctionValue[GridTableForm];
LSAMon-Find-most-important-documents-table

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding “setter”, “dropper”, and “taker” functions as summarized in a previous section.

For example:

p = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic];

p⟹LSAMonTakeMatrix

If other values are put in the context they can be obtained through the (generic) function LSAMonTakeContext, [AAp1]:

Short@(p⟹LSAMonTakeContext)["documents"]
 
(* <|"id.0001" -> "1604", "id.0002" -> "THE TRAGEDY OF HAMLET, PRINCE OF DENMARK", <<220>>, "id.0223" -> "THE END"|> *) 

Another generic function from [AAp1] is LSAMonTakeValue (used many times above.)

Here is an example of the “data dropper” LSAMonDropDocuments:

Keys[p⟹LSAMonDropDocuments⟹LSAMonTakeContext]

(* {"documentTermMatrix", "terms", "stopWords", "stemmingRules"} *)

(The “droppers” simply use the state monad function LSAMonDropFromContext, [AAp1]. For example, LSAMonDropDocuments is equivalent to LSAMonDropFromContext["documents"].)

The utilization of SSparseMatrix objects

The LSAMon monad heavily relies on SSparseMatrix objects, [AAp6, AA5], for internal representation of data and computation results.

An SSparseMatrix object is a matrix with named rows and columns.

Here is an example.

n = 6;
rmat = ToSSparseMatrix[
   SparseArray[{{1, 2} -> 1, {4, 5} -> 1}, {n, n}], 
   "RowNames" -> RandomSample[CharacterRange["A", "Z"], n], 
   "ColumnNames" -> RandomSample[CharacterRange["a", "z"], n]];
MatrixForm[rmat]
LSAMon-The-utilization-of-SSparseMatrix-random-matrix

In this section we look into some useful SSparseMatrix idioms applied within LSAMon.

Visualize with sorted rows and columns

In some situations it is beneficial to sort rows and columns of the (weighted) document-term matrix.

docTermMat = 
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix;
MatrixPlot[docTermMat[[Ordering[RowSums[docTermMat]],  Ordering[ColumnSums[docTermMat]]]], MaxPlotPoints -> 300, ImageSize -> Large]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeces-docTermMat-plot

The most popular terms in the document collection can be found through the association of the column sums of the document-term matrix.

TakeLargest[ColumnSumsAssociation[lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"state" -> 8852, "govern" -> 8147, "year" -> 6362, "nation" -> 6182,
     "congress" -> 5040, "unit" -> 5040, "countri" -> 4504, 
     "peopl" -> 4306, "american" -> 3648, "law" -> 3496|> *)
     

Similarly for the least popular terms.

TakeSmallest[
 ColumnSumsAssociation[
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"036" -> 1, "027" -> 1, "_____________________" -> 1, "0111" -> 1, 
     "006" -> 1, "0000" -> 1, "0001" -> 1, "______________________" -> 1, 
     "____" -> 1, "____________________" -> 1|> *)

Showing only non-zero columns

In some cases we want to show only columns of the data or computation results matrices that have non-zero elements.

Here is an example (similar to other examples in the previous section.)

lsaHamlet⟹
  LSAMonRepresentByTerms[{"this country is rotten", 
    "where is my sword my lord", 
    "poison in the ear should be in the play"}]⟹
  LSAMonEchoFunctionValue[ MatrixForm[#1[[All, Keys[Select[ColumnSumsAssociation[#1], #1 > 0 &]]]]] &];
LSAMon-The-utilization-of-SSparseMatrix-lsaHamlet-queries-to-terms-matrix

In the pipeline code above: (i) from the list of queries a representation matrix is made, (ii) that matrix is assigned to the pipeline value, (iii) in the pipeline echo value function the non-zero columns are selected by using the keys of the non-zero elements of the association obtained with ColumnSumsAssociation.
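That idiom can be packaged into a small helper function. (A sketch; the name takeNonZeroColumns is hypothetical and not part of the SSparseMatrix package.)

takeNonZeroColumns[smat_] :=
  smat[[All, Keys[Select[ColumnSumsAssociation[smat], # > 0 &]]]];

With it the echo step above can be written as LSAMonEchoFunctionValue[MatrixForm[takeNonZeroColumns[#]] &].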

Similarities based on representation by terms

Here is a way to compute the similarity matrix of different sets of documents that are not required to be in the monad’s document collection.

sMat1 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
  LSAMonTakeValue

sMat2 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹
  LSAMonTakeValue

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-terms-similarities-matrix

Similarities based on representation by topics

Similarly to the weighted Boolean similarity matrix computation above, we can compute a similarity matrix using the topics representations. Note that an additional normalization step is required.

sMat1 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
   LSAMonTakeValue;
sMat1 = WeightTermsOfSSparseMatrix[sMat1, "None", "None", "Cosine"]

sMat2 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹ 
   LSAMonTakeValue;
sMat2 = WeightTermsOfSSparseMatrix[sMat2, "None", "None", "Cosine"]

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-topics-similarities-matrix

Note the differences with the weighted Boolean similarity matrix in the previous sub-section – the similarities that are less than 1 are noticeably larger.

Unit tests

The development of LSAMon was done with two types of unit tests: (i) directly specified tests, [AAp8], and (ii) tests based on randomly generated pipelines, [AAp9].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate – and postulate – pipeline behavior.

Directly specified tests

Here we run the unit tests file “MonadicLatentSemanticAnalysis-Unit-Tests.wlt”, [AAp8].

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicLatentSemanticAnalysis-Unit-Tests.wlt"]
]

The natural language derived test ID’s should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "USASpeechesData", "HamletData", "StopWords", 
    "Make-document-term-matrix-1", "Make-document-term-matrix-2",
    "Apply-term-weights-1", "Apply-term-weights-2", "Topic-extraction-1",
    "Topic-extraction-2", "Topic-extraction-3", "Topic-extraction-4",
    "Statistical-thesaurus-1", "Topics-representation-1",
    "Take-document-term-matrix-1", "Take-weighted-document-term-matrix-1",
    "Take-document-term-matrix-2", "Take-weighted-document-term-matrix-2",
    "Take-terms-1", "Take-Factors-1", "Take-Factors-2", "Take-Factors-3",
    "Take-Factors-4", "Take-StopWords-1", "Take-StemmingRules-1"} *)

Random pipelines tests

Since the monad LSAMon is a DSL it is natural to test it with a large number of randomly generated “sentences” of that DSL. For the LSAMon DSL the sentences are LSAMon pipelines. The package “MonadicLatentSemanticAnalysisRandomPipelinesUnitTests.m”, [AAp9], has functions for generation of LSAMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeLSAMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

LSAMon-Unit-tests-random-pipelines-sample-table

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunLSAMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages, all of the rest passed.

rpTRObj = TestReport[res]

(The message failures, of course, have to be examined – some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Dimension reduction extensions

It would be nice to extend the Dimension reduction functionalities of LSAMon to include other algorithms like Independent Component Analysis (ICA), [Wk5]. Ideally with LSAMon we can do comparisons between SVD, NNMF, and ICA like the image de-noising based comparison explained in [AA8].

Another direction is to utilize Neural Networks for the topic extraction and making of statistical thesauri.

Conversational agent

Since LSAMon is a DSL it can be relatively easily interfaced with a natural language interface.

Here is an example of natural language commands parsed into LSA code using the package [AAp13].

LSAMon-Future-directions-parsed-LSA-commands-table

Implementation notes

The implementation methodology of the LSAMon monad packages [AAp3, AAp9] followed the methodology created for the ClCon monad package [AAp10, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document “A monad for classification workflows”, [AA6].

A lot of the functionalities and signatures of LSAMon were designed and programmed through considerations of natural language commands specifications given to a specialized conversational agent.

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Non-Negative Matrix Factorization algorithm implementation in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Monadic Latent Semantic Analysis unit tests, (2019), MathematicaForPrediction at GitHub.

[AAp9] Anton Antonov, Monadic Latent Semantic Analysis random pipelines Mathematica unit tests, (2019), MathematicaForPrediction at GitHub.

[AAp10] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp11] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp12] Anton Antonov, Independent Component Analysis Mathematica package, MathematicaForPrediction at GitHub.

[AAp13] Anton Antonov, Latent semantic analysis workflows grammar in EBNF, (2018), ConversationalAgents at GitHub.

MathematicaForPrediction articles

[AA1] Anton Antonov, “Monad code generation and extension”, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, “Topic and thesaurus extraction from a document collection”, (2013), MathematicaForPrediction at GitHub.

[AA3] Anton Antonov, “The Great conversation in USA presidential speeches”, (2017), MathematicaForPrediction at WordPress.

[AA4] Anton Antonov, “Contingency tables creation examples”, (2016), MathematicaForPrediction at WordPress.

[AA5] Anton Antonov, “RSparseMatrix for sparse matrices with named rows and columns”, (2015), MathematicaForPrediction at WordPress.

[AA6] Anton Antonov, “A monad for classification workflows”, (2018), MathematicaForPrediction at WordPress.

[AA7] Anton Antonov, “Independent component analysis for multidimensional signals”, (2016), MathematicaForPrediction at WordPress.

[AA8] Anton Antonov, “Comparison of PCA, NNMF, and ICA over image de-noising”, (2016), MathematicaForPrediction at WordPress.

Other

[Wk1] Wikipedia entry, Monad,

[Wk2] Wikipedia entry, Latent semantic analysis,

[Wk3] Wikipedia entry, Distributional semantics,

[Wk4] Wikipedia entry, Non-negative matrix factorization,

[LE1] Lars Elden, Matrix Methods in Data Mining and Pattern Recognition, 2007, SIAM. ISBN-13: 978-0898716269.

[MB1] Michael W. Berry & Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval, 2nd. ed., 2005, SIAM. ISBN-13: 978-0898715811.

[MS1] Magnus Sahlgren, “The Distributional Hypothesis”, (2008), Rivista di Linguistica, 20 (1): 33–53.

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub.


QRMon for some credit risk article

Introduction

In this notebook/document we apply the monad QRMon [3] over data of the article [1]. In order to get the data we use the extraction procedure described in [2].

(I saw the article [1] while browsing LinkedIn today. I met one of the authors during the event "Data Science Salon Miami Feb 2018".)

Extract data

I extracted the data from the image using "Recovering data points from an image".

img = Import["https://www.spglobal.com/_assets/images/marketintelligence/blog-images/demonstration-of-model-fit-comparison-visualization.png"]

extractedData
(* {{-1., 0.284894}, {-0.987395, 0.340483}, {-0.966387, 
  0.215408}, {-0.941176, 0.416918}, {-0.894958, 0.222356}, {-0.890756,
   0.215408}, {-0.878151, 0.0903323}, {-0.848739, 
  0.132024}, {-0.844538, 0.10423}, {-0.831933, 0.333535}, {-0.819328, 
  0.180665}, {-0.781513, 0.423867}, {-0.756303, 0.40997}, {-0.752101, 
  0.528097}, {-0.747899, 0.416918}, {-0.731092, 0.375227}, {-0.714286,
   0.194562}, {-0.710084, 0.340483}, {-0.651261, 
  0.555891}, {-0.647059, 0.333535}, {-0.605042, 0.496828}, {-0.57563, 
  0.}, {-0.512605, 0.354381}, {-0.491597, 0.368278}, {-0.487395, 
  0.472508}, {-0.478992, 0.479456}, {-0.453782, 0.437764}, {-0.357143,
   0.15287}, {-0.344538, 0.340483}, {-0.331933, 0.333535}, {-0.315126,
   0.500302}, {-0.285714, 0.396073}, {-0.247899, 
  0.618429}, {-0.201681, 0.541994}, {-0.159664, 0.680967}, {-0.10084, 
  1.06314}, {-0.0966387, 0.993656}, {0., 1.36193}, {0.0210084, 
  1.44532}, {0.0420168, 1.5148}, {0.0504202, 1.5148}, {0.0882353, 
  1.41405}, {0.130252, 1.70937}, {0.172269, 2.029}, {0.176471, 
  1.7858}, {0.222689, 2.20272}, {0.226891, 2.23746}, {0.231092, 
  2.23746}, {0.239496, 1.96647}, {0.268908, 1.94562}, {0.273109, 
  1.91088}, {0.277311, 1.91088}, {0.281513, 1.94562}, {0.294118, 
  2.2861}, {0.319328, 2.26526}, {0.327731, 2.3}, {0.432773, 
  1.68157}, {0.462185, 1.86918}, {0.5, 2.00121}} *)

ListPlot[extractedData, PlotRange -> All, PlotTheme -> "Detailed"]

Apply QRMon

Load packages. (For more details see [4].)

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Apply the QRMon workflow within the TraceMonad:

TraceMonadUnit[QRMonUnit[extractedData]]⟹"lift data to the monad"⟹
  QRMonEchoDataSummary⟹"echo data summary"⟹
  QRMonQuantileRegression[12, 0.5]⟹"do Quantile Regression with\nB-spline basis with 12 knots"⟹
  QRMonPlot⟹"plot the data and regression curve"⟹
  QRMonEcho[Style["Tabulate QRMon steps and explanations:", Purple, Bold]]⟹"echo an explanation message"⟹
  TraceMonadEchoGrid;

References

[1] Moody Hadi and Danny Haydon, "A Perspective On Machine Learning In Credit Risk", (2018), S&P Global Market Intelligence.

[2] Andy Ross, answer of "Recovering data points from an image", (2012).

[3] Anton Antonov, "A monad for Quantile Regression workflows", (2018), MathematicaForPrediction at WordPress.

[4] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub*,https://github.com/antononcube/MathematicaForPrediction.

Mathematica-vs-R: Deep learning examples

Introduction

This MathematicaVsR at GitHub project is for the comparison of the Deep Learning functionalities in R/RStudio and Mathematica/Wolfram Language (WL).

The project is aimed to mirror and aid the talk "Deep Learning series (session 2)" of the meetup Orlando Machine Learning and Data Science.

The focus of the talk is R and Keras, so the project structure is strongly influenced by the content of the book Deep learning with R, [1], and the corresponding Rmd notebooks, [2].

Some of Mathematica’s notebooks repeat the material in [2]. Some are original versions.

WL’s Neural Nets framework and abilities are fairly well described in the reference page "Neural Networks in the Wolfram Language overview", [4], and the webinar talks [5].

The corresponding documentation pages [3] (R) and [6] (WL) can be used for a very fruitful comparison of features and abilities.

Remark: With "deep learning with R" here we mean "Keras with R".

Remark: An alternative to R/Keras and Mathematica/MXNet is the library H2O (that has interfaces to Java, Python, R, Scala.) See project’s directory R.H2O for examples.

The presentation

The big picture

Deep learning can be used for both supervised and unsupervised learning. In this project we concentrate on supervised learning.

The following diagram outlines the general, simple classification workflow we have in mind.

simple_classification_workflow

Here is a corresponding classification monadic pipeline in Mathematica:

monadic_pipeline

Code samples

R-Keras uses monadic pipelines through the library magrittr. For example:

model <- keras_model_sequential() 
model %>% 
  layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>% 
  layer_dropout(rate = 0.4) %>% 
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')

The corresponding Mathematica command is:

model =
 NetChain[{
   LinearLayer[256, "Input" -> 784],
   ElementwiseLayer[Ramp],            
   DropoutLayer[0.4],
   LinearLayer[128],
   ElementwiseLayer[Ramp],            
   DropoutLayer[0.3],
   LinearLayer[10],
   SoftmaxLayer[] (* corresponds to the softmax activation of the final dense layer in the R model *)
 }]

Comparison

Installation

  • Mathematica

  • The neural networks framework comes with Mathematica. (No additional installation required.)

  • R

  • Pretty straightforward using the directions in [3]. (A short list.)

  • Some additional Python installation is required.

Simple neural network classifier over MNIST data

Vector classification

TBD…

Categorical classification

TBD…

Regression

Encoders and decoders

The Mathematica encoders (for neural networks and generally for machine learning tasks) are very well designed and quite advanced.

The encoders in R-Keras are fairly useful but not as advanced as those in Mathematica.

[TBD: Encoder correspondence…]

Dealing with over-fitting

Repositories of pre-trained models

Documentation

References

[1] F. Chollet, J. J. Allaire, Deep learning with R, (2018).

[2] J. J. Allaire, Deep Learning with R notebooks, (2018), GitHub.

[3] RStudio, Keras reference.

[4] Wolfram Research, "Neural Networks in the Wolfram Language overview".

[5] Wolfram Research, "Machine Learning Webinar Series".

[6] Wolfram Research, "Neural Networks guide".

Progressive Machine Learning Examples

Introduction

In this MathematicaVsR at GitHub project we show how to do Progressive machine learning using two types of classifiers based on:

  • Tries with Frequencies, [AAp2, AAp3, AA1],

  • Sparse Matrix Recommender framework [AAp4, AA2].

Progressive learning is a type of Online machine learning. For more details see [Wk1]. The Progressive learning problem is defined as follows.

Problem definition:

  • Assume that the data is sequentially available.
    • Meaning, at a given time only part of the data is available, and after a certain time interval new data can be obtained.

    • In view of classification, it is assumed that at a given time not all class labels are present in the data already obtained.

    • Let us call this a data stream.

  • Make a machine learning algorithm that updates its model continuously or sequentially in time over a given data stream.

    • Let us call such an algorithm a Progressive Learning Algorithm (PLA).

In comparison, the typical (classical) machine learning algorithms assume that representative training data is available and after training that data is no longer needed to make predictions. Progressive machine learning has more general assumptions about the data and its problem formulation is closer to how humans learn to classify objects.

Below we show the applications of two types of classifiers as PLA’s. One is based on Tries with Frequencies (TF), [AAp2, AAp3, AA1], the other on an Item-item Recommender (IIR) framework [AAp4, AA2].

Remark: Note that both TF and IIR come from tackling Unsupervised machine learning tasks, but here they are applied in the context of Supervised machine learning.

General workflow

The Mathematica and R notebooks follow the steps in the following flow chart.

"Progressive-machine-learning-with-Tries"

For detailed explanations see any of the notebooks.

Project organization

Mathematica files

R files

Example runs

(For details see Progressive-machine-learning-examples.md.)

Using Tries with Frequencies

Here is an example run with Tries with Frequencies, [AAp2, AA1]:

"PLA-Trie-run"

Here are the obtained ROC curves:

"PLA-Trie-ROCs-thresholds"

We can see that the Progressive learning process does improve its success rates over time.

Using an Item-item recommender system

Here is an example run with an Item-item recommender system, [AAp4, AA2]:

"PLA-SMR-run"

Here are the obtained ROC curves:

"PLA-SMR-ROCs-thresholds"

References

Packages

[AAp1] Anton Antonov, Obtain and transform Mathematica machine learning data-sets, GetMachineLearningDataset.m, (2018), MathematicaVsR at GitHub.

[AAp2] Anton Antonov, Java tries with frequencies Mathematica package, JavaTriesWithFrequencies.m, (2017), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Tries with frequencies R package, TriesWithFrequencies.R, (2014), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Sparse matrix recommender framework in Mathematica, SparseMatrixRecommenderFramework.m, (2014), MathematicaForPrediction at GitHub.

Articles

[Wk1] Wikipedia entry, Online machine learning.

[AA1] Anton Antonov, "Tries with frequencies in Java", (2017), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, "A Fast and Agile Item-Item Recommender: Design and Implementation", (2011), Wolfram Technology Conference 2011.

Monad code generation and extension

… in Mathematica / Wolfram Language

Anton Antonov

MathematicaForPrediction at GitHub

MathematicaVsR at GitHub

June 2017

Introduction

This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.

The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:

  1. easy to construct, read, and modify sequences of commands (pipelines),
  2. easy to program polymorphic behaviour,
  3. easy to program context utilization.

Speaking informally,

  • Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.

The theoretical background provided in this document is given in the Wikipedia article on Monadic programming, [Wk1], and the article “The essence of functional programming” by Philip Wadler, [H3]. The code in this document is based on the primary monad definition given in [Wk1,H3]. (Based on the “Kleisli triple” and used in Haskell.)

The general monad structure can be seen as:

  1. a software design pattern;
  2. a fundamental programming construct (similar to class in object-oriented programming);
  3. an interface for software types to have implementations of.

In this document we treat the monad structure as a design pattern, [Wk3]. (After reading [H3] point 2 becomes more obvious. A similar in spirit, minimalistic approach to Object-oriented Design Patterns is given in [AA1].)

We do not deal with types for monads explicitly, we generate code for monads instead. One reason for this is the “monad design pattern” perspective; another one is that in Mathematica / WL the notion of algebraic data type is not needed — pattern matching comes from the core “book of replacement rules” principle.

The rest of the document is organized as follows.

1. Fundamental sections The section “What is a monad?” gives the necessary definitions. The section “The basic Maybe monad” shows how to program a monad from scratch in Mathematica / WL. The section “Extensions with polymorphic behavior” shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming, the rest of the document can be skipped.)

2. Monadic programming in practice The section “Monad code generation” describes packages for generating monad code. The section “Flow control in monads” describes additional, control flow functionalities. The section “General work-flow of monad code generation utilization” gives a general perspective on the use of monad code generation. The section “Software design with monadic programming” discusses (small scale) software design with monadic programming.

3. Case study sections: The case study sections “Contextual monad classification” and “Tracing monad pipelines” hopefully have interesting and engaging examples of monad code generation, extension, and utilization.

What is a monad?

The monad definition

In this document a monad is any combination of a symbol M and two operators, unit and bind, that adheres to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [H3]).

Here are operators for a monad associated with a certain symbol M:

  1. monad unit function (“return” in Haskell notation) is Unit[x_] := M[x];
  2. monad bind function (“>>=” in Haskell notation) is a rule like Bind[M[x_], f_] := f[x] with MatchQ[f[x],M[_]] giving True.

Note that:

  • the function Bind unwraps the content of M[_] and gives it to the function f;
  • the functions fᵢ are responsible for returning results wrapped with the monad symbol M.
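
As a tiny concrete illustration of these definitions (M is just an inert symbol here, and the squaring function is made up for the example):

Unit[x_] := M[x];
Bind[M[x_], f_] := f[x];

Bind[Unit[3], M[#^2] &]

(* M[9] *)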

Here is an illustration formula showing a monad pipeline:

Monad-formula-generic


From the definition and formula it should be clear that if the result f[x] of Bind[M[x], f] satisfies the test MatchQ[f[x], _M] then that result is ready to be fed to the next binding operation in the monad’s pipeline. Also, it is clear that it is easy to program the pipeline functionality with Fold:

Fold[Bind, M[x], {f1, f2, f3}]

(* Bind[Bind[Bind[M[x], f1], f2], f3] *)

The monad laws

The monad laws definitions are taken from [H1] and [H3]. In the monad laws given below the symbol “⟹” stands for the monad’s binding operation and “↦” stands for a function in anonymous form.

Here is a table with the laws; written with the notation above they read:

  1. Left identity: unit[a] ⟹ f is equivalent to f[a].
  2. Right identity: m ⟹ unit is equivalent to m.
  3. Associativity: (m ⟹ f) ⟹ g is equivalent to m ⟹ (x ↦ f[x] ⟹ g).

Remark: The monad laws are satisfied for every symbol in Mathematica / WL with List being the unit operation and Apply being the binding operation.
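
As a quick check of this remark (f and g below are helper functions made up only for the illustration):

f[x_] := List[x + 1];
g[x_] := List[2 x];

(* left identity: Apply[f, List[a]] equals f[a] *)
Apply[f, List[3]] === f[3]

(* right identity: Apply[List, m] equals m *)
Apply[List, List[3]] === List[3]

(* associativity *)
Apply[g, Apply[f, List[3]]] === Apply[Apply[g, f[#]] &, List[3]]

(* all three return True *)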

Expected monadic programming features

Looking at formula (1) — and having certain programming experiences — we can expect the following features when using monadic programming.

  • Computations that can be expressed with monad pipelines are easy to construct and read.
  • By programming the binding function we can tuck in a variety of monad behaviours — this is the so-called “programmable semicolon” feature of monads.
  • Monad pipelines can be constructed with Fold, but with suitable definitions of infix operators like DoubleLongRightArrow (⟹) we can produce code that resembles the pipeline in formula (1).
  • A monad pipeline can have polymorphic behaviour by overloading the signatures of fᵢ (and, if we have to, Bind.)

These points are clarified below. For more complete discussions see [Wk1] or [H3].

The basic Maybe monad

It is fairly easy to program the basic monad Maybe discussed in [Wk1].

The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails then the whole pipeline returns a designated failure symbol, say None; otherwise the result after the last step is wrapped in another designated symbol, say Maybe.

Here is the special version of the generic pipeline formula (1) for the Maybe monad:

"Monad-formula-maybe"


Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):

MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];

MaybeUnit[None] := None;
MaybeUnit[x_] := Maybe[x];

MaybeBind[None, f_] := None;
MaybeBind[Maybe[x_], f_] := 
  Block[{res = f[x]}, If[FreeQ[res, None], res, None]];

MaybeEcho[x_] := Maybe@Echo[x];
MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];

MaybeOption[f_][xs_] := 
  Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];

In order to give the code we write a pipeline form, let us define a suitable infix operator (like “⟹”) that uses MaybeBind:

DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
DoubleLongRightArrow[x_, y_, z__] := 
  DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];

Here is an example of a Maybe monad pipeline using the definitions so far:

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

MaybeUnit[data]⟹(* lift data into the monad *)
 (Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 MaybeEcho⟹(* display current value *)
 (Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
 None *)

The result is None because:

  1. the data has a number that is too small, and
  2. the definition of MaybeBind stops the pipeline aggressively using a FreeQ[_,None] test.

Monad laws verification

Let us convince ourselves that the current definition of MaybeBind gives a monad.

The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.

"Monad-laws-table-Maybe"

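
Here is a minimal version of that check, using the definitions above (f and g are helper functions made up only for this check):

f[x_] := Maybe[x + 1];
g[x_] := Maybe[2 x];

(* left identity *)
MaybeBind[MaybeUnit[3], f] === f[3]

(* right identity *)
MaybeBind[Maybe[3], MaybeUnit] === Maybe[3]

(* associativity *)
MaybeBind[MaybeBind[Maybe[3], f], g] === MaybeBind[Maybe[3], MaybeBind[f[#], g] &]

(* all three return True *)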

Extensions with polymorphic behavior

We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.

For example, the extension of the Maybe monad to handle Dataset objects is fairly easy and straightforward.

Here is the formula of the Maybe monad pipeline extended with Dataset objects:

Here is an example of a polymorphic function definition for the Maybe monad:

MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];

MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];
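
Here is a small usage sketch of these two definitions (assuming the Maybe monad code above is loaded; the dataset dsSmall is made up for the illustration):

MaybeUnit[{1, 0.2, 5, 3}]⟹MaybeFilter[# > 1 &]

(* Maybe[{5, 3}] *)

dsSmall = Dataset[{<|"x" -> 1|>, <|"x" -> 5|>, <|"x" -> 3|>}];
MaybeUnit[dsSmall]⟹MaybeFilter[#x > 1 &]

(* the rows with x > 1, as a Dataset wrapped in Maybe *)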

See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.

A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)

Polymorphic monads in R’s dplyr

The R package dplyr, [R1], has implementations centered around monadic polymorphic behavior. The command pipelines based on dplyr can work on R data frames, SQL tables, and Spark data frames without changes.

Here is a diagram of a typical work-flow with dplyr:

"dplyr-pipeline"

The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.

Monad code generation

We can see monad code definitions like the ones for Maybe as some sort of initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates (see [WL1]). As it was mentioned in the introduction, we do not deal with types for monads explicitly; we generate code for monads instead.

This section gives examples with packages that generate monad code. The case study sections have examples of packages that utilize generated monad code.

Maybe monads code generation

The package [AA2] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section “General work-flow of monad code generation utilization”.)

Here is an example:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]

GenerateMaybeMonadCode["AnotherMaybe"]

data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};

AnotherMaybeUnit[data]⟹(* lift data into the monad *)
 (AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
 AnotherMaybeEcho⟹(* display current value *)
 (AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)

(* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
   AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
   None *)

We see that we get the same result as above (None) and a message reporting the failure.

State monads code generation

The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [AA3].)

Here is the special version of the generic pipeline formula (1) for the State monad:

"Monad-formula-State"


Note that since the State monad pipeline carries both a value and a state, it is a good idea to have functions that manipulate them separately. For example, we can have functions for context modification and context retrieval. (These are done in [AA3].)
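
Roughly, the generated definitions have the shape sketched below. (This is a simplification; the actual code generated by [AA3] does more, e.g. handling of string contexts and echoing of failure messages.)

StMonUnit[None] := None;
StMonUnit[x_] := StMon[x, <||>];
StMonUnit[x_, context_Association] := StMon[x, context];

StMonBind[None, f_] := None;
StMonBind[StMon[x_, context_Association], f_] :=
  Block[{res = f[x, context]}, If[FreeQ[res, None], res, None]];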

This loads the package [AA3]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]

This generates the State monad for the prefix “StMon”:

GenerateStateMonadCode["StMon"]

The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Functions for context deposit and retrieval are invoked several times.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
  StMonEchoValue⟹
  StMonEchoContext⟹
  StMonAddToContext["data"]⟹
  StMonEchoContext⟹
  (StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["data"]⟹
  StMonEchoValue⟹
  StMonRetrieveFromContext["mark"]⟹
  StMonEchoValue;

(* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   context: <|mark->TooSmall,threshold->0.5|>
   context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
   value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
   value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
   value: TooSmall *)

Flow control in monads

We can implement dedicated functions for governing the pipeline flow in a monad.

Let us look at a breakdown of these kinds of functions using the State monad StMon generated above.

Optional acceptance of a function result

A basic and simple pipeline control function is one for optional acceptance of a result: if applying f produces a failure then its result is ignored (and the current pipeline value is kept.)

Here is an example with StMonOption:

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonOption[If[# < 0.3, None] & /@ # &]⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)

Without StMonOption we get failure:

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 (If[# < 0.3, None] & /@ # &)⟹
 StMonEchoValue

(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
   StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
   None *)

Conditional execution of functions

It is natural to want the ability to choose a pipeline function application based on a condition.

This can be done with the functions StMonIfElse and StMonWhen.

SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
 StMonEchoValue⟹
 StMonIfElse[
  Or @@ (# < 0.4 & /@ #) &,
  (Echo["A too small value is present.", "warning:"]; 
    StMon[Style[#1, Red], #2]) &,
  StMon[Style[#1, Blue], #2] &]⟹
 StMonEchoValue

 (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    warning: A too small value is present.
    value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
    StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)

Remark: Using flow control functions like StMonIfElse and StMonWhen with appropriate messages is a better way of handling computations that might fail. The silent failure handling of the basic Maybe monad is convenient only in a small number of use cases.

Iterative functions

The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of Nest, NestWhile, FoldList, etc.

In [AA3] these functionalities are provided through the function StMonIterate.

Here is a basic example using Nest that corresponds to Nest[#+1&,1,3]:

StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]

(* StMon[4, <||>] *)

Consider this command that uses the full signature of NestWhileList:

NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]

(* {1, 2, 3, 4, 5} *)

Here is the corresponding StMon iteration code:

StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]

(* StMon[{1, 2, 3, 4, 5}, <||>] *)

Here is another results accumulation example with FixedPointList:

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)

When the functions NestList, NestWhileList, FixedPointList are used with StMonIterate their results can be stored in the context. Here is an example:

StMonUnit[1.]⟹
 StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]

(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>], 
    StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>], 
    StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)

More elaborate tests can be found in [AA8].

Partial pipelines

Because of the associativity law we can design pipeline flows based on functions made of “sub-pipelines.”

fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];

fDIter = Function[{x, ct}, 
   StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];

StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;

(*
  value: 7
  context: <||>
  value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
  context: <||> *)

General work-flow of monad code generation utilization

With the abilities to generate and utilize monad codes it is natural to consider the following work-flow. (It is also shown in the diagram below.)

  1. Come up with an idea that can be expressed with monadic programming.
  2. Look for suitable monad implementation.
  3. If there is no such implementation, make one (or two, or five.)
  4. Having a suitable monad implementation, generate the monad code.
  5. Implement additional pipeline functions addressing envisioned use cases.
  6. Start making pipelines for the problem domain of interest.
  7. Are the pipelines satisfactory? If not, go to 5 (or 2).

"make-monads"

Monad templates

The template nature of the general monads can be exemplified with the group of functions in the package StateMonadCodeGenerator.m, [AA3].

They are in five groups:

  1. base monad functions (unit construction and testing, binding),
  2. display of the value and context,
  3. context manipulation (deposit, retrieval, modification),
  4. flow governing (optional new value, conditional function application, iteration),
  5. other convenience functions.

We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.

Software design with monadic programming

The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).

The answers to the question “When to use monadic programming?” can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (as it is done in Haskell and Scala.) Here we discuss making specialized software with or within already existing systems.

Framework design

Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that: 1) significant speed-up can be achieved when making new applications, and 2) a set of policies can be imposed on the new applications.

The rigidness of the framework provides and supports its flexibility — the framework has a backbone of rigid parts and a set of “hot spots” where new functionalities are plugged-in.

Usually Object-Oriented Programming (OOP) frameworks provide inversion of control — the general work-flow is already established, only parts of it are changed. (This is characterized with “leave the driving to us” and “don’t call us, we will call you.”)

The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)

In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)

DSL development

Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.

Generally, in order to make a DSL we have to choose the language syntax and grammar. With monadic programming the syntax and the grammar of the commands are clear. (The monad pipelines are the commands.) What is left is “just” the choice of particular functions and their implementations.

Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar — without developing the corresponding interpreters — would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.

Contextual monad classification (case study)

In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.

Motivation

We want to provide a DSL for doing machine learning classification tasks that allows us:

  1. to do basic summarization and visualization of the data;
  2. to control splitting of the data into training and testing sets;
  3. to apply the built-in classifiers;
  4. to apply classifier ensembles (see [AA9] and [AA10]);
  5. to evaluate classifier performances with standard measures; and
  6. to make ROC plots.

Also, we want the DSL design to provide clear directions for how to add (hook-up or plug-in) new functionalities.

The package [AA4] discussed below provides such a DSL through monadic programming.

Package and data loading

This loads the package [AA4]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]

This gets some test data (the Titanic dataset):

dataName = "Titanic";
ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
If[dataName == "FisherIris", varNames = Most[varNames]];
ds = ds[All, AssociationThread[varNames -> #] &];

Monad design

The package [AA4] provides functions for the monad ClCon — the functions implemented in [AA4] have the prefix “ClCon”.

The classifier contexts are Association objects. The pipeline values can have the form:

ClCon[ val, context:(_String|_Association) ]

The ClCon specific monad functions deposit or retrieve values from the context with the keys: “trainData”, “testData”, “classifier”. The general idea is that if the current value of the pipeline cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, then a message is issued. This is illustrated with the following pipeline example (with comments).

"ClCon-basic-example"

The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context: different functions are used if that variable is a ClassifierFunction object or an association of named ClassifierFunction objects.

Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure. With those kinds of operations it would be easy to make interpreters for natural language DSLs.
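
For orientation, here is a minimal pipeline in the same spirit, built only from ClCon functions that appear later in this section. (A sketch; the particular method and split fraction are chosen just for the illustration, the comments describe the presumed context flow, and the outputs are omitted.)

ClConUnit[ds]⟹
 ClConSplitData[0.75]⟹(* deposits "trainData" and "testData" into the context *)
 ClConMakeClassifier["NearestNeighbors"]⟹(* takes "trainData" from the context, deposits "classifier" *)
 ClConClassifierMeasurements[{"Accuracy"}]⟹(* takes "classifier" and "testData" from the context *)
 ClConEchoValue;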

Another usage example

The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, and an acceptance test; if the results are rejected, a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting. That allows the conditional classifier switch to be specified concisely.

First let us define a function that takes a Classify method as an argument, makes a classifier, and calculates performance measures.

ClSubPipe[method_String] :=
  Function[{x, ct},
   ClConUnit[x, ct]⟹
    ClConMakeClassifier[method]⟹
    ClConEchoFunctionContext["classifier:", 
     ClassifierInformation[#["classifier"], Method] &]⟹
    ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
    ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
    ClConEchoValue⟹
    ClConEchoFunctionContext[
     ClassifierMeasurements[#["classifier"], 
     ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
   ];

Using the sub-pipeline function ClSubPipe we make the outlined pipeline.

SeedRandom[12]
res =
  ClConUnit[ds]⟹
   ClConSplitData[0.7]⟹
   ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
   ClConEchoFunctionValue["xtabs:", 
    MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
   ClSubPipe["LogisticRegression"]⟹
   (If[#1["Accuracy"] > 0.8,
      Echo["Good accuracy!", "Success:"]; ClConFail,
      Echo["Make a new classifier", "Inaccurate:"]; 
      ClConUnit[#1, #2]] &)⟹
   ClSubPipe["RandomForest"];

"ClCon-pipeline-2-output"

Tracing monad pipelines (case study)

The monadic implementations in the package MonadicTracing.m, [AA5], allow tracking of the pipeline execution of functions within other monads.

The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation. (I.e. without conversion to strings etc.)

It turned out that by programming MonadicTracing.m I came up with a monad transformer; see [Wk2], [H2].

Package loading

This loads the package [AA5]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Usage example

This generates a Maybe monad to be used in the example (for the prefix “Perhaps”):

GenerateMaybeMonadCode["Perhaps"]
GenerateMaybeMonadSpecialCode["Perhaps"]

In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings. Producing the grid of functions and comments happens “naturally” with the monad function TraceMonadEchoGrid.

data = RandomInteger[10, 15];

TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
  TraceMonadEchoContext⟹
  PerhapsFilter[# > 3 &]⟹"filter current value"⟹
  PerhapsEcho⟹"display current value"⟹
  PerhapsWhen[#[[3]] > 3 &, 
   PerhapsEchoFunction[Style[#, Red] &]]⟹
  (Perhaps[#/4] &)⟹
  PerhapsEcho⟹"display current value again"⟹
  TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];

Note that:

  1. the tracing is initiated by just using TraceMonadUnit;
  2. pipeline functions (actual code) and comments are interleaved;
  3. putting a comment string after a pipeline function is optional.

Another example is the ClCon pipeline in the sub-section “Monad design” in the previous section.

Summary

This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features for day-to-day programming and for coming up with architectural designs.

The style is based on WL’s basic language features. As a consequence it is fairly concise and produces light overhead.

Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.

References

Monadic programming

[Wk1] Wikipedia entry: Monad (functional programming), URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry: Monad transformer, URL: https://en.wikipedia.org/wiki/Monad_transformer .

[Wk3] Wikipedia entry: Software Design Pattern, URL: https://en.wikipedia.org/wiki/Software_design_pattern .

[H1] Haskell.org article: Monad laws, URL: https://wiki.haskell.org/Monad_laws.

[H2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333-343. doi:10.1145/199448.199528.

[H3] Philip Wadler, “The essence of functional programming”, (1992), 19th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.

R

[R1] Hadley Wickham et al., dplyr: A Grammar of Data Manipulation, (2014), tidyverse at GitHub, URL: https://github.com/tidyverse/dplyr . (See also, http://dplyr.tidyverse.org .)

Mathematica / Wolfram Language

[WL1] Leonid Shifrin, “Metaprogramming in Wolfram Language”, (2012), Mathematica StackExchange. (Also posted at Wolfram Community in 2017.) URL of the Mathematica StackExchange answer: https://mathematica.stackexchange.com/a/2352/34008 . URL of the Wolfram Community post: http://community.wolfram.com/groups/-/m/t/1121273 .

MathematicaForPrediction

[AA1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016) MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, Maybe monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m .

[AA3] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AA4] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AA5] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AA6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AA7] Anton Antonov, Simple monadic programming, (2017), MathematicaForPrediction at GitHub. (Preliminary version, 40% done.) URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf .

[AA8] Anton Antonov, Generated State Monad Mathematica unit tests, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m .

[AA9] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

[AA10] Anton Antonov, “ROC for classifier ensembles, bootstrapping, damaging, and interpolation”, (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

Text analysis of Trump tweets

Introduction

This post is to proclaim the MathematicaVsR at GitHub project “Text analysis of Trump tweets” in which we compare Mathematica and R over text analyses of Twitter messages made by Donald Trump (and his staff) before the USA president elections in 2016.

The project follows and extends the exposition and analysis of the R-based blog post "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half" by David Robinson at VarianceExplained.org; see [1].

The blog post [1] links to several sources that claim that during the election campaign Donald Trump tweeted from his Android phone and his campaign staff tweeted from an iPhone. The blog post [1] examines this hypothesis in a quantitative way (using various R packages.)

The hypothesis in question is well summarized with the tweet:

Every non-hyperbolic tweet is from iPhone (his staff).
Every hyperbolic tweet is from Android (from him). pic.twitter.com/GWr6D8h5ed
— Todd Vaziri (@tvaziri) August 6, 2016

This conjecture is fairly well supported by the following mosaic plots, [2]:

TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Sentiment-Device TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Device-Weekday-Sentiment

We can see that the Twitter messages from iPhone are much more likely to be neutral, and the ones from Android are much more polarized. As Christian Rudder (one of the founders of OkCupid, a dating website) explains in the chapter "Death by a Thousand Mehs" of the book "Dataclysm", [3], having a polarizing image (online persona) is a very good strategy to engage an online audience:

[…] And the effect isn’t small — being highly polarizing will in fact get you about 70 percent more messages. That means variance allows you to effectively jump several "leagues" up in the dating pecking order — […]

(The mosaic plots above were made for the Mathematica-part of this project. Mosaic plots and weekday tags are not used in [1].)

Concrete steps

The Mathematica-part of this project does not follow closely the blog post [1]. After the ingestion of the data provided in [1], the Mathematica-part applies alternative algorithms to support and extend the analysis in [1].

The sections in the R-part notebook correspond to some — not all — of the sections in the Mathematica-part.

The following list of steps is for the Mathematica-part.

  1. Data ingestion
    • The blog post [1] shows how to do the ingestion of the Twitter data of Donald Trump messages in R.

    • That can be done in Mathematica too using the built-in function ServiceConnect, but that is not necessary since [1] provides a link to the ingested data used in [1]:
      load(url("http://varianceexplained.org/files/trump_tweets_df.rda"))

    • This leads to ingesting an R data frame in the Mathematica-part using RLink.

  2. Adding tags

    • We have to extract device tags for the messages — each message is associated with one of the tags "Android", "iPad", or "iPhone".

    • Using the message time-stamps each message is associated with time tags corresponding to the creation time month, hour, weekday, etc.

    • Here is a summary of the data at this stage:

    "trumpTweetsTbl-Summary"

  3. Time series and time related distributions

    • We can make several types of time series plots for general insight and to support the main conjecture.

    • Here is a Mathematica made plot for the same statistic computed in [1] that shows differences in tweet posting behavior:

    "TimeSeries"

    • Here are distributions plots of tweets per weekday:

    "ViolinPlots"

  4. Classification into sentiments and Facebook topics

    • Using the built-in classifiers of Mathematica each tweet message is associated with a sentiment tag and a Facebook topic tag.

    • In [1] the results of this step are derived in several stages.

    • Here is a mosaic plot for conditional probabilities of devices, topics, and sentiments:

    "Device-Topic-Sentiment-MosaicPlot"

  5. Device-word association rules

    • Using Association rule learning, device tags are associated with words in the tweets.

    • In the Mathematica-part these association rules are not needed for the sentiment analysis (because of the built-in classifiers.)

    • The association rule mining is done mostly to support and extend the text analysis in [1] and, of course, for comparison purposes.

    • Here is an example of derived association rules together with their most important measures:

    "iPhone-Association-Rules"

In [1] the sentiments are derived from computed device-word associations, so in [1] the order of the steps is 1-2-3-5-4. In Mathematica we do not need steps 3 and 5 in order to get the sentiments in step 4.

Comparison

Using Mathematica for sentiment analysis is much more direct because of the built-in classifiers.
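
For example, the built-in classifiers can be called directly on the message texts (a minimal sketch; the shown outputs are illustrative and not taken from the project):

Classify["Sentiment", "The meeting went great, tremendous success!"]

(* "Positive" *)

Classify["FacebookTopic", "Watching the game tonight with friends."]

(* "Sports" *)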

The R-based blog post [1] uses heavily the "pipeline" operator %>% which is kind of a recent addition to R (and it is both fashionable and convenient to use it.) In Mathematica the related operators are Postfix (//), Prefix (@), Infix (~~), Composition (@*), and RightComposition (/*).

Making the time series plots with the R package "ggplot2" requires making special data frames. I am inclined to think that the Mathematica plotting of time series is more direct, but for this task the data wrangling codes in Mathematica and R are fairly comparable.

Generally speaking, the R package "arules" — used in this project for Association rule learning — is somewhat awkward to use:

  • it is data frame centric, does not work directly with lists of lists, and

  • requires the use of factors.

The Apriori implementation in “arules” is much faster than the one in “AprioriAlgorithm.m” — “arules” uses a more efficient algorithm implemented in C.

References

[1] David Robinson, "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half", (2016), VarianceExplained.org.

[2] Anton Antonov, "Mosaic plots for data visualization", (2014), MathematicaForPrediction at WordPress.

[3] Christian Rudder, Dataclysm, Crown, 2014. ASIN: B00J1IQUX8 .

Handwritten digits recognition by matrix factorization

Introduction

This MathematicaVsR at GitHub project is for comparing Mathematica and R for the tasks of classifier creation, execution, and evaluation using the MNIST database of images of handwritten digits.

Here are the bases built with two different classifiers:

  • Singular Value Decomposition (SVD)

SVD-basis-for-5

  • Non-Negative Matrix Factorization (NNMF)

NNMF-basis-for-5

Here are the confusion matrices of the two classifiers:

  • SVD

SVD-confusion-matrix

  • NNMF

NNMF-confusion-matrix

The blog post "Classification of handwritten digits" (published 2013) has a related, more elaborate discussion over a much smaller database of handwritten digits.

Concrete steps

The concrete steps taken in scripts and documents of this project follow.

  1. Ingest the binary data files into arrays that can be visualized as digit images.
    • We have two sets: 60,000 training images and 10,000 testing images.

  2. Make a linear vector space representation of the images by simple unfolding.

  3. For each digit find the corresponding representation matrix and factorize it.

  4. Store the matrix factorization results in a suitable data structure. (These results comprise the classifier training.)
    • One of the matrix factors is seen as a new basis.

  5. For a given test image (and its linear vector space representation) find the basis that approximates it best. The corresponding digit is the classifier prediction for the given test image.

  6. Evaluate the classifier(s) over all test images and compute accuracy, F-Scores, and other measures.
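
To make step 5 concrete, here is a rough sketch with hypothetical helper names. It assumes bases is an association from a digit to a matrix whose rows form an orthonormal basis for that digit (which holds for the SVD factors; for NNMF a least squares fit over the basis would be used instead), and it classifies a test vector by the smallest residual:

residual[v_?VectorQ, basis_?MatrixQ] := Norm[v - Transpose[basis] . (basis . v)];

classifyDigit[v_?VectorQ, bases_Association] :=
  First @ Keys @ TakeSmallest[residual[v, #] & /@ bases, 1];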

Scripts

There are scripts going through the steps listed above:

Documents

The following documents give expositions that are suitable for reading and for following the steps and the corresponding results.

Observations

Ingestion

I first figured out in R how to ingest the data in the binary files of the MNIST database. There are several online resources (blog posts, GitHub repositories) that discuss ingestion of the MNIST binary files.

After that making the corresponding code in Mathematica was easy.

Classification results

The classification results are the same in Mathematica and R for both SVD and NNMF. (As expected.)

NNMF

NNMF classifiers use the MathematicaForPrediction at GitHub implementations: NonNegativeMatrixFactorization.m and NonNegativeMatrixFactorization.R.

Parallel computations

Both Mathematica and R have relatively simple set-up of parallel computations.

Graphics

It was not very straightforward to come up with visualizations of the MNIST images in R. The Mathematica visualization is much more flexible when it comes to plot labeling.

Going further

Comparison with other classifiers

Using Mathematica’s built-in classifiers it was easy to compare the SVD and NNMF classifiers with neural network ones and others. (The SVD and NNMF classifiers are much faster to build and they bring comparable precision.)

It would be nice to repeat that in R using one or several of the neural network classifiers provided by Google, Microsoft, H2O, Baidu, etc.

Classifier ensembles

Another possible extension is to use classifier ensembles and Receiver Operating Characteristic (ROC) curves to create better classifiers. (Both in Mathematica and R.)

Importance of variables

Using a classifier-agnostic importance-of-variables procedure we can figure out:

  • which NNMF basis vectors (images) are most important for the classification precision,

  • which image rows or columns are most important for each digit, or similarly

  • which image squares of a, say, 4×4 image grid are most important.