A monad for Latent Semantic Analysis workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad, [Wk1], for the specification and execution of Latent Semantic Analysis workflows. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Latent Semantic Analysis (LSA)? : A statistical method (or a technique) for finding relationships in natural language texts that is based on the so called Distributional hypothesis, [Wk2, Wk3]. (The Distributional hypothesis can be simply stated as “linguistic items with similar distributions have similar meanings”; for an insightful philosophical and scientific discussion see [MS1].) LSA can be seen as the application of Dimensionality reduction techniques over matrices derived with the Vector space model.

The goal of the monad design is to make the specification of LSA workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario.

The monad is named LSAMon and it is based on the State monad package “StateMonadCodeGenerator.m”, [AAp1, AA1], the document-term matrix making package “DocumentTermMatrixConstruction.m”, [AAp4, AA2], the Non-Negative Matrix Factorization (NNMF) package “NonNegativeMatrixFactorization.m”, [AAp5, AA2], and the package “SSparseMatrix.m”, [AAp6, AA5], that provides matrix objects with named rows and columns.

The data for this document is obtained from WL’s repository and it is manipulated into a certain ready-to-utilize form (and uploaded to GitHub.)

The monadic programming design is used as a Software Design Pattern. The LSAMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of Latent Semantic Analysis workflows.

Here is an example of using the LSAMon monad over a collection of documents that consists of 233 US state of union speeches.

LSAMon-Introduction-pipeline
LSAMon-Introduction-pipeline-echos

The table above is produced with the package “MonadicTracing.m”, [AAp2, AA1], and some of the explanations below also utilize that package.

As mentioned above, the monad LSAMon can be seen as a DSL. Because of this, the monad pipelines made with LSAMon are sometimes called “specifications”.

Remark: In this document with “term” we mean “a word, a word stem, or other type of token.”

Remark: LSA and Latent Semantic Indexing (LSI) are considered more or less to be synonyms. I think that “latent semantic analysis” sounds more universal and that “latent semantic indexing” as a name refers to a specific Information Retrieval technique. Below we refer to “LSI functions” like “IDF” and “TF-IDF” that are applied within the generic LSA workflow.

Contents description

The document has the following structure.

  • The sections “Package load” and “Data load” obtain the needed code and data.
  • The sections “Design considerations” and “Monad design” provide motivation and rationale for the design decisions.

  • The sections “LSAMon overview”, “Monad elements”, and “The utilization of SSparseMatrix objects” provide technical descriptions needed to utilize the LSAMon monad.

    • (Using a fair amount of examples.)
  • The section “Unit tests” describes the tests used in the development of the LSAMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section “Future plans” outlines future directions of development.
  • The section “Implementation notes” just says that LSAMon’s development process and this document follow those of the classification workflows monad ClCon, [AA6].

Remark: One can read only the sections “Introduction”, “Design considerations”, “Monad design”, and “LSAMon overview”. That set of sections provides a fairly good, programming-language-agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp7]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load data that is used in the rest of the document. The text data was obtained through WL’s repository, transformed into a more convenient form, and uploaded to GitHub.

The text summarization and plots are done through LSAMon, which in turn uses the function RecordsSummary from the package “MathematicaForPredictionUtilities.m”, [AAp7].

Hamlet

textHamlet = 
  ToString /@ 
   Flatten[Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Data/MathematicaVsR-Data-Hamlet.csv"]];

TakeLargestBy[
 Tally[DeleteStopwords[ToLowerCase[Flatten[TextWords /@ textHamlet]]]], #[[2]] &, 20]

(* {{"ham", 358}, {"lord", 225}, {"king", 196}, {"o", 124}, {"queen", 120}, 
    {"shall", 114}, {"good", 109}, {"hor", 109}, {"come",  107}, {"hamlet", 107}, 
    {"thou", 105}, {"let", 96}, {"thy", 86}, {"pol", 86}, {"like", 81}, {"sir", 75}, 
    {"'t", 75}, {"know", 74}, {"enter", 73}, {"th", 72}} *)

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonEchoDocumentTermMatrixStatistics;
LSAMon-Data-Load-Hamlet-echo

USA state of union speeches

url = "https://github.com/antononcube/MathematicaVsR/blob/master/Data/MathematicaVsR-Data-StateOfUnionSpeeches.JSON.zip?raw=true";
str = Import[url, "String"];
filename = First@Import[StringToStream[str], "ZIP"];
aStateOfUnionSpeeches = Association@ImportString[Import[StringToStream[str], {"ZIP", filename, "String"}], "JSON"];

lsaObj = 
LSAMonUnit[aStateOfUnionSpeeches]⟹
LSAMonMakeDocumentTermMatrix⟹
LSAMonEchoDocumentTermMatrixStatistics["LogBase" -> 10];
LSAMon-Data-Load-StateOfUnionSpeeches-echo
TakeLargest[ColumnSumsAssociation[lsaObj⟹LSAMonTakeDocumentTermMatrix], 12]

(* <|"government" -> 7106, "states" -> 6502, "congress" -> 5023, 
     "united" -> 4847, "people" -> 4103, "year" -> 4022, 
     "country" -> 3469, "great" -> 3276, "public" -> 3094, "new" -> 3022, 
     "000" -> 2960, "time" -> 2922|> *)

Stop words

In some of the examples below we want to explicitly specify the stop words. Here are stop words derived using the built-in functions DictionaryLookup and DeleteStopwords.

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];

Short[stopWords]

(* {"a", "about", "above", "across", "add-on", "after", "again", <<290>>, 
   "you'll", "your", "you're", "yours", "yourself", "yourselves", "you've" } *)
    

Design considerations

The steps of the main LSA workflow addressed in this document follow.

  1. Get a collection of documents with associated ID’s.

  2. Create a document-term matrix.

    1. Here we apply the Bag-of-words model and Vector space model.
      1. The sequential order of the words is ignored and each document is represented as a point in a multi-dimensional vector space.

      2. The axes of that vector space correspond to the unique words found in the whole document collection.

    2. Consider the application of stemming rules.

    3. Consider the removal of stop words.

  3. Apply matrix-entries weighting functions.

    1. Those functions come from LSI.

    2. Functions like “IDF”, “TF-IDF”, “GFIDF”.

  4. Extract topics.

    1. One possible statistical way of doing this is with Dimensionality reduction.

    2. We consider using Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF).

  5. Make and display the topics table.

  6. Extract and display a statistical thesaurus of selected words.

  7. Map search queries or unseen documents over the extracted topics.

  8. Find the most important documents in the document collection. (Optional.)

The following flow-chart corresponds to the list of steps above.

LSA-worflows

In order to address:

  • the introduction of new elements in LSA workflows,

  • the variability of workflow elements, and

  • iterative changes and refinement of workflows,

it is beneficial to have a DSL for LSA workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make an LSA workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […]

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as “pipelines”. We use the terms “monad pipeline” and “pipeline” below.

Monad design

The monad we consider is designed to speed up the programming of the LSA workflows outlined in the previous section. The monad is named LSAMon for “Latent Semantic Analysis Monad”.

We want to be able to construct monad pipelines of the general form:

LSAMon-Monad-Design-formula-1

LSAMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

LSAMon-Monad-Design-formula-2

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.
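For example, here is a small pipeline of form (2) sketched with functions demonstrated later in this document:

LSAMonUnit[textHamlet]⟹           (* lift: pipeline value textHamlet, empty context <||> *)
  LSAMonMakeDocumentTermMatrix⟹   (* changes the context: adds the document-term matrix *)
  LSAMonTakeDocumentTermMatrix     (* takes an object from the context as the end result *)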

In the monad pipelines of LSAMon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or

  2. The object is (relatively) hard to compute.

Such objects are the document-term matrix, the dimensionality reduction factors, and the related topics.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial LSA workflows.

  • The monad works with lists of strings and associations with string values.

  • The monad uses the Linear vector space model.

  • The document-term frequency matrix can be created after stop word removal and/or word stemming.

  • It is easy to specify and apply different LSI weight functions. (Like “IDF” or “GFIDF”.)

  • The monad can do dimension reduction with SVD and NNMF, and the corresponding matrix factors are retrievable with monad functions.

  • Documents (or query strings) external to the monad are easily mapped into the monad’s Linear vector space of terms and Linear vector space of topics.

  • The monad allows cursory examination and summarization of the data.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • It is easy to tabulate extracted topics and related statistical thesauri.

  • It is easy to specify and apply re-weighting functions for the entries of the document-term contingency matrices.

The LSAMon components and their interactions are fairly simple.

The main LSAMon operations implicitly put in the context or utilize from the context the following objects:

  • document-term matrix,

  • the factors obtained by matrix factorization algorithms,

  • extracted topics.

Note that the set of types of LSAMon pipeline values is fairly heterogeneous and certain awareness of “the current pipeline value” is assumed when composing LSAMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package “StateMonadCodeGenerator.m”, [AAp1].

LSAMon overview

When using a monad we lift certain data into the “monad space”, use the monad’s operations to navigate computations in that space, and at some point take results from it.

With the approach taken in this document the “lifting” into the LSAMon monad is done with the function LSAMonUnit. Results from the monad can be obtained with the functions LSAMonTakeValue, LSAMonTakeContext, or with the other LSAMon functions with the prefix “LSAMonTake” (see below.)

Here is a corresponding diagram of a generic computation with the LSAMon monad:

LSAMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete LSAMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is shown together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The LSAMon functions are separated into four groups:

  • operations,

  • setters and droppers,

  • takers,

  • State Monad generic functions.

Monad functions interaction with the pipeline value and context

An overview of those functions is given in the tables in the next two sub-sections. The next section, “Monad elements”, gives details and examples for the usage of the LSAMon operations.

LSAMon-Overview-operations-context-interactions-table
LSAMon-Overview-setters-droppers-takers-context-interactions-table

State monad functions

Here are the LSAMon State Monad functions (generated using the prefix “LSAMon”, [AAp1, AA1].)

LSAMon-Overview-StMon-usage-descriptions-table

Main monad functions

Here are the usage descriptions of the main (not monad-supportive) LSAMon functions, which are explained in detail in the next section.

LSAMon-Overview-operations-usage-descriptions-table

Monad elements

In this section we show that LSAMon has all of the properties listed in the previous section.

The monad head

The monad head is LSAMon. Anything wrapped in LSAMon can serve as the monad’s pipeline value. It is better, though, to use the constructor LSAMonUnit, which adheres to the definition in [Wk1].

LSAMon[textHamlet, <||>]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic]⟹LSAMonEchoFunctionContext[Short];

Lifting data to the monad

The function lifting the data into the monad LSAMon is LSAMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

LSAMonUnit[]⟹LSAMonSetDocuments[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

(See the sub-section “Setters, droppers, and takers” for more details of setting and taking values in LSAMon contexts.)

Currently the monad can deal with data in the following forms:

  • vectors of strings,

  • associations with string values.

Generally, WL makes it easy to extract columns from datasets in order to obtain vectors or matrices, so datasets are not currently supported in LSAMon.

Making of the document-term matrix

As mentioned above, with “term” we mean “a word or a stemmed word”. Here are examples of stemmed words.

WordData[#, "PorterStem"] & /@ {"consequential", "constitution", "forcing", ""}

The fundamental model of LSAMon is the so called Vector space model (or the closely related Bag-of-words model.) The document-term matrix is a linear vector space representation of the documents collection. That representation is further used in LSAMon to find topics and statistical thesauri.

Here is an example of ad hoc construction of a document-term matrix using a couple of paragraphs from “Hamlet”.

inds = {10, 19};
aTempText = AssociationThread[inds, textHamlet[[inds]]]

MatrixForm @ CrossTabulate[Flatten[KeyValueMap[Thread[{#1, #2}] &, TextWords /@ ToLowerCase[aTempText]], 1]]

When we construct the document-term matrix we (often) want to stem the words and (almost always) want to remove stop words. LSAMon’s function LSAMonMakeDocumentTermMatrix makes the document-term matrix and takes specifications for stemming and stop words.

lsaObj =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Automatic]⟹
   LSAMonEchoFunctionContext[ MatrixPlot[#documentTermMatrix] &]⟹
   LSAMonEchoFunctionContext[TakeLargest[ColumnSumsAssociation[#documentTermMatrix], 12] &];

We can retrieve the stop words used in a monad with the function LSAMonTakeStopWords.

Short[lsaObj⟹LSAMonTakeStopWords]

We can retrieve the stemming rules used in a monad with the function LSAMonTakeStemmingRules.

Short[lsaObj⟹LSAMonTakeStemmingRules]

The specification Automatic for stemming rules uses WordData[#,"PorterStem"]&.

Instead of the options style signature we can use positional signature.

  • Options style: LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic] .

  • Positional style: LSAMonMakeDocumentTermMatrix[{}, Automatic] .

LSI weight functions

After making the document-term matrix we will most likely apply LSI weight functions, [Wk2], like “GFIDF” and “TF-IDF”. (This follows the “standard” approach used in search engines for calculating weights for document-term matrices; see [MB1].)

Frequency matrix

We use the following definition of the frequency document-term matrix F.

Each entry f_ij of the matrix F is the number of occurrences of the term j in the document i.

Weights

Each entry m_ij of the weighted document-term matrix M derived from the frequency document-term matrix F is expressed with the formula

m_ij = g_j · l_ij · d_i,

where g_j is the global term weight, l_ij is the local term weight, and d_i is the normalization weight.
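To illustrate the formula, here is a small direct computation of one weight combination: global weight “IDF”, local weight given by the raw term frequencies, and “Cosine” normalization. (This is a sketch for illustration only, not the implementation in [AAp4]; the particular IDF variant, Log of documents count over term presence count, is an assumption of the sketch.)

freqMat = {{2, 0, 1}, {0, 3, 1}}; (* a tiny frequency matrix F: rows are documents, columns are terms *)
idf = Log[Length[freqMat]/Map[Count[#, _?Positive] &, Transpose[freqMat]]]; (* global weights g_j (an "IDF" variant) *)
weightedMat = Map[Normalize, (#*idf) & /@ freqMat]; (* local weights l_ij are the raw frequencies; "Cosine" gives the d_i normalization *)
MatrixForm[weightedMat]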

Various formulas exist for these weights and one of the challenges is to find the right combination of them when using different document collections.

Here is a table of weight functions formulas.

LSAMon-LSI-weight-functions-table

Computation specifications

LSAMon function LSAMonApplyTermWeightFunctions delegates the LSI weight functions application to the package “DocumentTermMatrixConstruction.m”, [AAp4].

Here is an example.

lsaHamlet = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix;
wmat =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat], 6]

Instead of using the positional signature of LSAMonApplyTermWeightFunctions we can specify the LSI functions using options.

wmat2 =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "TermFrequency", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat2], 6]

Here are summaries of the non-zero values of the weighted document-term matrix derived with different combinations of global, local, and normalization weight functions.

Magnify[#, 0.8] &@Multicolumn[Framed /@ #, 6] &@Flatten@
  Table[
   (wmat =
     lsaHamlet⟹
      LSAMonApplyTermWeightFunctions[gw, lw, nf]⟹
      LSAMonTakeWeightedDocumentTermMatrix;
    RecordsSummary[SparseArray[wmat]["NonzeroValues"], 
     List@StringRiffle[{gw, lw, nf}, ", "]]),
   {gw, {"IDF", "GFIDF", "Binary", "None", "ColumnStochastic"}},
   {lw, {"Binary", "Log", "None"}},
   {nf, {"Cosine", "None", "RowStochastic"}}]
LSAMon-LSI-weight-functions-combinations-application-table

Extracting topics

Streamlining topic extraction is one of the main reasons LSAMon was implemented. The topic extraction corresponds to the so called “syntagmatic” relationships between the terms, [MS1].

Theoretical outline

The original weighted document-term matrix M is decomposed into the matrix factors W and H.

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H.

The rows of H are the topics. SVD produces orthogonal topics; NNMF does not.

The i-th document of the collection corresponds to the i-th row of W. Finding the Nearest Neighbors (NN’s) of the i-th document using the row similarities of the matrix W gives document NN’s through topic similarity.

The terms correspond to the columns of H. Finding NN’s based on similarities of H’s columns produces statistical thesaurus entries.

The term groups provided by H’s rows correspond to “syntagmatic” relationships. Using similarities of H’s columns we can produce term clusters that correspond to “paradigmatic” relationships.

Computation specifications

Here is an example using the play “Hamlet” in which we specify additional stop words.

stopWords2 = {"enter", "exit", "[exit", "ham", "hor", "laer", "pol", "oph", "thy", "thee", "act", "scene"};

SeedRandom[2381]
lsaHamlet =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Join[stopWords, stopWords2]]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 12, "MinNumberOfDocumentsPerTerm" -> 10, Method -> "NNMF", "MaxSteps" -> 20]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-Hamlet-topics-table

Here is an example using the USA presidents “state of union” speeches.

SeedRandom[7681]
lsaSpeeches =
  LSAMonUnit[aStateOfUnionSpeeches]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic,  "StopWords" -> Automatic]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 40, Method -> "NNMF", "MaxSteps" -> 12]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-StateOfUnionSpeeches-topics-table

Note that in both examples:

  1. stemming is used when creating the document-term matrix,

  2. the default LSI re-weighting functions are used: “IDF”, “None”, “Cosine”,

  3. the dimension reduction algorithm NNMF is used.

Things to keep in mind:

  1. The interpretability provided by NNMF comes at a price.

  2. NNMF is prone to getting stuck in local minima, so several topic extractions (and corresponding evaluations) have to be done.

  3. We would get different results with different NNMF runs using the same parameters. (NNMF uses random number initialization.)

  4. The NNMF topic vectors are not orthogonal.

  5. SVD is much faster than NNMF, but its topic vectors are hard to interpret.

  6. Generally, the topics derived with SVD are stable; they do not change between runs with the same parameters.

  7. The SVD topic vectors are orthogonal, which makes it quick to find representations of documents that are not in the monad’s document collection.

The document-topic matrix W has column names that are automatically derived from the top three terms in each topic.

ColumnNames[lsaHamlet⟹LSAMonTakeW]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Of course, the row names of H are the same.

RowNames[lsaHamlet⟹LSAMonTakeH]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Extracting statistical thesauri

The statistical thesaurus extraction corresponds to the “paradigmatic” relationships between the terms, [MS1].

Here is an example over the State of Union speeches.

entryWords = {"bank", "war", "economy", "school", "port", "health", "enemy", "nuclear"};

lsaSpeeches⟹
  LSAMonExtractStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12]⟹
  LSAMonEchoStatisticalThesaurus;
LSAMon-Extracting-statistical-thesauri-echo

In the code above: (i) the options signature style is used, (ii) the statistical thesaurus entry words are stemmed first.

We can also call LSAMonEchoStatisticalThesaurus directly without calling LSAMonExtractStatisticalThesaurus first.

 lsaSpeeches⟹
   LSAMonEchoStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12];
LSAMon-Extracting-statistical-thesauri-echo

Mapping queries and documents to terms

One of the most natural operations is to find the representation of an arbitrary document (or a sentence, or a list of words) in the monad’s Linear vector space of terms. This is done with the function LSAMonRepresentByTerms.

Here is an example in which a sentence is represented as a one-row matrix (in that space.)

obj =
  lsaHamlet⟹
   LSAMonRepresentByTerms["Hamlet, Prince of Denmark killed the king."]⟹
   LSAMonEchoValue;

Here we display only the non-zero columns of that matrix.

obj⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];

Transformation steps

Assume that LSAMonRepresentByTerms is given a list of sentences. Then that function performs the following steps. (A plain-WL sketch of steps 1-3 is given after the list.)

1. Each sentence is split into a list of words.

2. If the monad’s document-term matrix was made by removing stop words, the same stop words are removed from the list of words.

3. If the monad’s document-term matrix was made with stemming, the same stemming rules are applied to the list of words.

4. The LSI global weights and the LSI local weight and normalizer functions are applied to the sentences’ contingency matrix.
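Here is the plain-WL sketch of steps 1-3 mentioned above. (This is an illustration only, not LSAMon’s internal code; step 3 uses the mapping that corresponds to the Automatic stemming specification.)

monadStopWords = lsaHamlet⟹LSAMonTakeStopWords;
sentenceWords = TextWords[ToLowerCase["Hamlet, Prince of Denmark killed the king."]]; (* step 1: split into words *)
sentenceWords = DeleteCases[sentenceWords, Alternatives @@ monadStopWords]; (* step 2: remove the monad's stop words *)
WordData[#, "PorterStem"] & /@ sentenceWords (* step 3: stem as with the Automatic stemming specification *)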

Equivalent representation

Let us convince ourselves that the documents used in the monad to build the weighted document-term matrix have the same representation as the corresponding rows of that matrix.

Here is an association of documents from monad’s document collection.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
queries
 
(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTerms[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["weightedDocumentTermMatrix"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["weightedDocumentTermMatrix"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-context-sub-matrix

Mapping queries and documents to topics

Another natural operation is to find the representation of an arbitrary document (or a list of words) in the monad’s Linear vector space of topics. This is done with the function LSAMonRepresentByTopics.

Here is an example.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
Short /@ queries

(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTopics[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["W"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["W"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix

Theory

In order to clarify what the function LSAMonRepresentByTopics is doing let us go through the formulas it is based on.

The original weighted document-term matrix M is decomposed into the matrix factors W and H.

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H:

m_i ≈ w_i.H.

For a query vector q_0 ∈ ℝ^n we want to find its topics representation vector x ∈ ℝ^k:

q_0 ≈ x.H.

Denote with H^(-1) the inverse or pseudo-inverse matrix of H. We have:

q_0.H^(-1) ≈ (x.H).H^(-1) = x.(H.H^(-1)) = x.I,

x ∈ ℝ^k, H^(-1) ∈ ℝ^(n×k), I ∈ ℝ^(k×k).

In LSAMon, for SVD H^(-1) = H^T; for NNMF H^(-1) is the pseudo-inverse of H.

The vector x is the topics representation obtained with LSAMonRepresentByTopics.
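Here is a sketch that carries out the formula directly. (The column alignment step and the use of PseudoInverse are assumptions of this illustration; it is not the internal code of LSAMonRepresentByTopics.)

HMat = lsaHamlet⟹LSAMonTakeH;
q0 = lsaHamlet⟹LSAMonRepresentByTerms["Hamlet, Prince of Denmark killed the king."]⟹LSAMonTakeValue;
q0 = q0[[All, ColumnNames[HMat]]]; (* align the query's columns with the terms of H *)
x = Normal[SparseArray[q0]].PseudoInverse[Normal[SparseArray[HMat]]] (* the topics representation of the query *)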

Tags representation

Sometimes we want to find the topics representation of tags associated with the monad’s documents, where the tag-document associations are one-to-many. See [AA3].

Let us consider a concrete example – we want to find what topics correspond to the different presidents in the collection of State of Union speeches.

Here we find the document tags (president names in this case.)

tags = StringReplace[
   RowNames[
    lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 
   RegularExpression[".\\d\\d\\d\\d-\\d\\d-\\d\\d"] -> ""];
Short[tags]

Here is the number of unique tags (president names.)

Length[Union[tags]]
(* 42 *)

Here we compute the tag-topics representation matrix using the function LSAMonRepresentDocumentTagsByTopics.

tagTopicsMat =
 lsaSpeeches⟹
  LSAMonRepresentDocumentTagsByTopics[tags]⟹
  LSAMonTakeValue

Here is a heatmap plot of the tag-topics matrix made with the package “HeatmapPlot.m”, [AAp11].

HeatmapPlot[tagTopicsMat[[All, Ordering@ColumnSums[tagTopicsMat]]], DistanceFunction -> None, ImageSize -> Large]
LSAMon-Tags-representation-heatmap

Finding the most important documents

There are several algorithms we can apply for finding the most important documents in the collection. LSAMon utilizes two types of algorithms: (1) graph centrality measures based, and (2) matrix factorization based. With certain graph centrality measures the two algorithms are equivalent. In this sub-section we demonstrate the matrix factorization algorithm (that uses SVD.)

Definition: The most important sentences have the most important words and the most important words are in the most important sentences.

That definition can be used to derive an iterations-based model that can be expressed with SVD or eigenvector finding algorithms, [LE1].
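Here is a sketch of that idea using WL’s built-in SingularValueDecomposition. (This is an illustration of the matrix factorization approach, not necessarily LSAMon’s exact implementation; the weighted document-term matrix is obtained as in the “LSI weight functions” section.)

wSSMat = lsaHamlet⟹LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹LSAMonTakeWeightedDocumentTermMatrix;
{u, s, v} = SingularValueDecomposition[Normal[SparseArray[wSSMat]], 1]; (* rank-1 truncated SVD *)
TakeLargest[AssociationThread[RowNames[wSSMat], Abs[Flatten[u]]], 3] (* the three highest-scored documents *)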

Here we pick an important part of the play “Hamlet”.

focusText = 
  First@Pick[textHamlet, StringMatchQ[textHamlet, ___ ~~ "to be" ~~ __ ~~ "or not to be" ~~ ___, IgnoreCase -> True]];
Short[focusText]

(* "Ham. To be, or not to be- that is the question: Whether 'tis ....y. 
    O, woe is me T' have seen what I have seen, see what I see!" *)

LSAMonUnit[StringSplit[ToLowerCase[focusText], {",", ".", ";", "!", "?"}]]⟹
  LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic]⟹
  LSAMonApplyTermWeightFunctions⟹
  LSAMonFindMostImportantDocuments[3]⟹
  LSAMonEchoFunctionValue[GridTableForm];
LSAMon-Find-most-important-documents-table

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding “setter”, “dropper”, and “taker” functions as summarized in a previous section.

For example:

p = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic];

p⟹LSAMonTakeMatrix

If other values are put in the context they can be obtained through the (generic) function LSAMonTakeContext, [AAp1]:

Short@(p⟹LSAMonTakeContext)["documents"]
 
(* <|"id.0001" -> "1604", "id.0002" -> "THE TRAGEDY OF HAMLET, PRINCE OF DENMARK", <<220>>, "id.0223" -> "THE END"|> *) 

Another generic function from [AAp1] is LSAMonTakeValue (used many times above.)

Here is an example of the “data dropper” LSAMonDropDocuments:

Keys[p⟹LSAMonDropDocuments⟹LSAMonTakeContext]

(* {"documentTermMatrix", "terms", "stopWords", "stemmingRules"} *)

(The “droppers” simply use the state monad function LSAMonDropFromContext, [AAp1]. For example, LSAMonDropDocuments is equivalent to LSAMonDropFromContext[“documents”].)

The utilization of SSparseMatrix objects

The LSAMon monad heavily relies on SSparseMatrix objects, [AAp6, AA5], for internal representation of data and computation results.

An SSparseMatrix object is a matrix with named rows and columns.

Here is an example.

n = 6;
rmat = ToSSparseMatrix[
   SparseArray[{{1, 2} -> 1, {4, 5} -> 1}, {n, n}], 
   "RowNames" -> RandomSample[CharacterRange["A", "Z"], n], 
   "ColumnNames" -> RandomSample[CharacterRange["a", "z"], n]];
MatrixForm[rmat]
LSAMon-The-utilization-of-SSparseMatrix-random-matrix

In this section we look into some useful SSparseMatrix idioms applied within LSAMon.

Visualize with sorted rows and columns

In some situations it is beneficial to sort rows and columns of the (weighted) document-term matrix.

docTermMat = 
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix;
MatrixPlot[docTermMat[[Ordering[RowSums[docTermMat]],  Ordering[ColumnSums[docTermMat]]]], MaxPlotPoints -> 300, ImageSize -> Large]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeces-docTermMat-plot

The most popular terms in the document collection can be found through the association of the column sums of the document-term matrix.

TakeLargest[ColumnSumsAssociation[lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"state" -> 8852, "govern" -> 8147, "year" -> 6362, "nation" -> 6182,
     "congress" -> 5040, "unit" -> 5040, "countri" -> 4504, 
     "peopl" -> 4306, "american" -> 3648, "law" -> 3496|> *)
     

Similarly for the least popular terms.

TakeSmallest[
 ColumnSumsAssociation[
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"036" -> 1, "027" -> 1, "_____________________" -> 1, "0111" -> 1, 
     "006" -> 1, "0000" -> 1, "0001" -> 1, "______________________" -> 1, 
     "____" -> 1, "____________________" -> 1|> *)

Showing only non-zero columns

In some cases we want to show only columns of the data or computation results matrices that have non-zero elements.

Here is an example (similar to other examples in the previous section.)

lsaHamlet⟹
  LSAMonRepresentByTerms[{"this country is rotten", 
    "where is my sword my lord", 
    "poison in the ear should be in the play"}]⟹
  LSAMonEchoFunctionValue[ MatrixForm[#1[[All, Keys[Select[ColumnSumsAssociation[#1], #1 > 0 &]]]]] &];
LSAMon-The-utilization-of-SSparseMatrix-lsaHamlet-queries-to-terms-matrix

In the pipeline code above: (i) from the list of queries a representation matrix is made, (ii) that matrix is assigned to the pipeline value, (iii) in the pipeline echo-value function the non-zero columns are selected by using the keys of the non-zero elements of the association obtained with ColumnSumsAssociation.

Similarities based on representation by terms

Here is a way to compute the similarity matrix of different sets of documents that are not required to be in the monad’s document collection.

sMat1 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
  LSAMonTakeValue

sMat2 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹
  LSAMonTakeValue

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-terms-similarities-matrix

Similarities based on representation by topics

Similarly to the weighted Boolean similarities matrix computation above, we can compute a similarity matrix using the topics representations. Note that an additional normalization step is required.

sMat1 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
   LSAMonTakeValue;
sMat1 = WeightTermsOfSSparseMatrix[sMat1, "None", "None", "Cosine"]

sMat2 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹ 
   LSAMonTakeValue;
sMat2 = WeightTermsOfSSparseMatrix[sMat2, "None", "None", "Cosine"]

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-topics-similarities-matrix

Note the differences with the weighted Boolean similarity matrix in the previous sub-section – the similarities that are less than 1 are noticeably larger.

Unit tests

The development of LSAMon was done with two types of unit tests: (i) directly specified tests, [AAp8], and (ii) tests based on randomly generated pipelines, [AAp9].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate – and postulate – pipeline behavior.

Directly specified tests

Here we run the unit tests file “MonadicLatentSemanticAnalysis-Unit-Tests.wlt”, [AAp8].

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicLatentSemanticAnalysis-Unit-Tests.wlt"]
]

The natural language derived test ID’s should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "USASpeechesData", "HamletData", "StopWords", 
    "Make-document-term-matrix-1", "Make-document-term-matrix-2",
    "Apply-term-weights-1", "Apply-term-weights-2", "Topic-extraction-1",
    "Topic-extraction-2", "Topic-extraction-3", "Topic-extraction-4",
    "Statistical-thesaurus-1", "Topics-representation-1",
    "Take-document-term-matrix-1", "Take-weighted-document-term-matrix-1",
    "Take-document-term-matrix-2", "Take-weighted-document-term-matrix-2",
    "Take-terms-1", "Take-Factors-1", "Take-Factors-2", "Take-Factors-3",
    "Take-Factors-4", "Take-StopWords-1", "Take-StemmingRules-1"} *)

Random pipelines tests

Since the monad LSAMon is a DSL it is natural to test it with a large number of randomly generated “sentences” of that DSL. For the LSAMon DSL the sentences are LSAMon pipelines. The package “MonadicLatentSemanticAnalysisRandomPipelinesUnitTests.m”, [AAp9], has functions for generation of LSAMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeLSAMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

LSAMon-Unit-tests-random-pipelines-sample-table

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunLSAMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages, all of the rest passed.

rpTRObj = TestReport[res]

(The message failures, of course, have to be examined – some bugs were found in that way. Currently the messages produced by the failing tests are expected ones.)

Future plans

Dimension reduction extensions

It would be nice to extend the Dimension reduction functionalities of LSAMon to include other algorithms like Independent Component Analysis (ICA), [Wk5]. Ideally with LSAMon we can do comparisons between SVD, NNMF, and ICA like the image de-noising based comparison explained in [AA8].

Another direction is to utilize Neural Networks for the topic extraction and making of statistical thesauri.

Conversational agent

Since LSAMon is a DSL it can be relatively easily interfaced with a natural language interface.

Here is an example of natural language commands parsed into LSA code using the package [AAp13].

LSAMon-Future-directions-parsed-LSA-commands-table

Implementation notes

The implementation methodology of the LSAMon monad packages [AAp3, AAp9] followed the methodology created for the ClCon monad package [AAp10, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document “A monad for classification workflows”, [AA6].

A lot of the functionalities and signatures of LSAMon were designed and programmed through considerations of natural language commands specifications given to a specialized conversational agent.

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Non-Negative Matrix Factorization algorithm implementation in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Monadic Latent Semantic Analysis unit tests, (2019), MathematicaVsR at GitHub.

[AAp9] Anton Antonov, Monadic Latent Semantic Analysis random pipelines Mathematica unit tests, (2019), MathematicaForPrediction at GitHub.

[AAp10] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp11] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp12] Anton Antonov, Independent Component Analysis Mathematica package, MathematicaForPrediction at GitHub.

[AAp13] Anton Antonov, Latent semantic analysis workflows grammar in EBNF, (2018), ConversationalAgents at GitHub.

MathematicaForPrediction articles

[AA1] Anton Antonov, “Monad code generation and extension”, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, “Topic and thesaurus extraction from a document collection”, (2013), MathematicaForPrediction at GitHub.

[AA3] Anton Antonov, “The Great conversation in USA presidential speeches”, (2017), MathematicaForPrediction at WordPress.

[AA4] Anton Antonov, “Contingency tables creation examples”, (2016), MathematicaForPrediction at WordPress.

[AA5] Anton Antonov, “RSparseMatrix for sparse matrices with named rows and columns”, (2015), MathematicaForPrediction at WordPress.

[AA6] Anton Antonov, “A monad for classification workflows”, (2018), MathematicaForPrediction at WordPress.

[AA7] Anton Antonov, “Independent component analysis for multidimensional signals”, (2016), MathematicaForPrediction at WordPress.

[AA8] Anton Antonov, “Comparison of PCA, NNMF, and ICA over image de-noising”, (2016), MathematicaForPrediction at WordPress.

Other

[Wk1] Wikipedia entry, Monad (functional programming).

[Wk2] Wikipedia entry, Latent semantic analysis.

[Wk3] Wikipedia entry, Distributional semantics.

[Wk4] Wikipedia entry, Non-negative matrix factorization.

[LE1] Lars Elden, Matrix Methods in Data Mining and Pattern Recognition, 2007, SIAM. ISBN-13: 978-0898716269.

[MB1] Michael W. Berry & Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval, 2nd. ed., 2005, SIAM. ISBN-13: 978-0898715811.

[MS1] Magnus Sahlgren, “The Distributional Hypothesis”, (2008), Rivista di Linguistica, 20 (1): 33–53.

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub.


Finding all structural breaks in time series

Introduction

In this document we show how to find the so called “structural breaks”, [Wk1], in a given time series. The algorithm is based on a systematic application of the Chow Test, [Wk2], combined with an algorithm for finding local extrema in noisy time series, [AA1].

The algorithm implementation is based on the packages “MonadicQuantileRegression.m”, [AAp1], and “MonadicStructuralBreaksFinder.m”, [AAp2]. The package [AAp1] provides the software monad QRMon that allows rapid and concise specification of Quantile Regression workflows. The package [AAp2] extends QRMon with functionalities related to structural breaks finding.

What is a structural break?

It looks like at least one type of “structural break” is defined through regression models, [Wk1]. Roughly speaking, a structural break point of a time series is a regressor point that splits the time series in such a way that the two obtained parts have very different regression parameters.

One way to test such a point is to use the Chow test, [Wk2]. From [Wk2] we have the definition:

The Chow test, proposed by econometrician Gregory Chow in 1960, is a test of whether the true coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis to test for the presence of a structural break at a period which can be assumed to be known a priori (for instance, a major historical event such as a war).
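For reference, the Chow test statistic given in [Wk2] is

F = ((S_C − (S_1 + S_2))/k) / ((S_1 + S_2)/(N_1 + N_2 − 2k)),

where S_C is the sum of squared residuals of the regression over the combined data, S_1 and S_2 are the sums of squared residuals of the regressions over the two groups, N_1 and N_2 are the numbers of observations in the two groups, and k is the number of regression parameters.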

Example

Here is an example of the described algorithm application to the data from [Wk2].

QRMonUnit[data]⟹QRMonPlotStructuralBreakSplits[ImageSize -> Small];
IntroductionsExample

Load packages

Here we load the packages [AAp1] and [AAp2].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicStructuralBreaksFinder.m"]

Data used

In this section we assign the data used in this document.

Illustration data from Wikipedia

Here is the data used in the Wikipedia article “Chow test”, [Wk2].

data = {{0.08, 0.34}, {0.16, 0.55}, {0.24, 0.54}, {0.32, 0.77}, {0.4, 
    0.77}, {0.48, 1.2}, {0.56, 0.57}, {0.64, 1.3}, {0.72, 1.}, {0.8, 
    1.3}, {0.88, 1.2}, {0.96, 0.88}, {1., 1.2}, {1.1, 1.3}, {1.2, 
    1.3}, {1.3, 1.4}, {1.4, 1.5}, {1.4, 1.5}, {1.5, 1.5}, {1.6, 
    1.6}, {1.7, 1.1}, {1.8, 0.98}, {1.8, 1.1}, {1.9, 1.4}, {2., 
    1.3}, {2.1, 1.5}, {2.2, 1.3}, {2.2, 1.3}, {2.3, 1.2}, {2.4, 
    1.1}, {2.5, 1.1}, {2.6, 1.2}, {2.6, 1.4}, {2.7, 1.3}, {2.8, 
    1.6}, {2.9, 1.5}, {3., 1.4}, {3., 1.8}, {3.1, 1.4}, {3.2, 
    1.4}, {3.3, 1.4}, {3.4, 2.}, {3.4, 2.}, {3.5, 1.5}, {3.6, 
    1.8}, {3.7, 2.1}, {3.8, 1.6}, {3.8, 1.8}, {3.9, 1.9}, {4., 2.1}};
ListPlot[data]
DataUsedWk2

S&P 500 Index

Here we get the time series corresponding to S&P 500 Index.

tsSP500 = FinancialData[Entity["Financial", "^SPX"], {{2015, 1, 1}, Date[]}]
DateListPlot[tsSP500, ImageSize -> Medium]
DataUsedSP500

Application of Chow Test

The Chow Test statistic is implemented in [AAp1]. In this document we rely on the relative comparison of the Chow Test statistic values: the larger the value of the Chow test statistic, the more likely we have a structural break.

Here is how we can apply the Chow Test with a QRMon pipeline to the [Wk2] data given above.

chowStats =
  QRMonUnit[data]⟹
   QRMonChowTestStatistic[Range[1, 3, 0.05], {1, x}]⟹
   QRMonTakeValue;

We see that the regressor point 1.7 has the largest Chow Test statistic value.

Block[{chPoint = TakeLargestBy[chowStats, Part[#, 2] &, 1]},
 ListPlot[{chowStats, chPoint}, Filling -> Axis,
  PlotLabel -> Row[{"Point with largest Chow Test statistic:", Spacer[8], chPoint}]]]
ApplicationOfChowTestchowStats

The first argument of QRMonChowTestStatistic is a list of regressor points or Automatic. The second argument is a list of functions to be used for the regressions.

Here is an example of an automatic values call.

chowStats2 = QRMonUnit[data]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
ListPlot[chowStats2,
 GridLines -> {
   Part[Part[chowStats2, All, 1],
    OutlierIdentifiers`OutlierPosition[Part[chowStats2, All, 2], OutlierIdentifiers`SPLUSQuartileIdentifierParameters]],
   None},
 GridLinesStyle -> Directive[{Orange, Dashed}], Filling -> Axis]
ApplicationOfChowTestchowStats2

For the set of values displayed above we can apply simple 1D outlier identification methods, [AAp3], to automatically find the structural break point.

chowStats2[[All, 1]][[OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]]]
(* {1.7} *)

OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {20} *)

We cannot use that approach for finding all structural breaks in the general time series case though, as exemplified with the following code using the S&P 500 Index time series.

chowStats3 = QRMonUnit[tsSP500]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
DateListPlot[chowStats3, Joined -> False, Filling -> Axis]
ApplicationOfChowTestSP500
OutlierPosition[chowStats3[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {} *)
 
OutlierPosition[chowStats3[[All, 2]], HampelIdentifierParameters]
(* {} *)

In the rest of the document we provide an algorithm that works for general time series.

Finding all structural break points

Consider the problem of finding all structural breaks in a given time series. That can be done (reasonably well) with the following procedure.

  1. Choose functions for testing for structural breaks (usually linear.)
  2. Apply Chow Test over dense enough set of regressor points.
  3. Make a time series of the obtained Chow Test statistics.
  4. Find the local maxima of the Chow Test statistics time series.
  5. Determine the most significant break point.
  6. Plot the splits corresponding to the found structural breaks.

QRMon has a function, QRMonFindLocalExtrema, for finding local extrema; see [AAp1, AA1]. For the goal of finding all structural breaks, that semi-symbolic algorithm is the crucial part in the steps above.

Computation

Choose fitting functions

fitFuncs = {1, x};

Find Chow test statistics local maxima

The computation below combines steps 2, 3, and 4.

qrObj =
  QRMonUnit[tsSP500]⟹
   QRMonFindChowTestLocalMaxima["Knots" -> 20, 
    "NearestWithOutliers" -> True, 
    "NumberOfProximityPoints" -> 5, "EchoPlots" -> True, 
    "DateListPlot" -> True, 
    ImageSize -> Medium]⟹
   QRMonEchoValue;
ComputationLocalMaxima

Find most significant structural break point

splitPoint = TakeLargestBy[qrObj⟹QRMonTakeValue, #[[2]] &, 1][[1, 1]]

Plot structural breaks splits and corresponding fittings

Here we just make the plots without showing them.

sbPlots =
  QRMonUnit[tsSP500]⟹
   QRMonPlotStructuralBreakSplits[(qrObj⟹ QRMonTakeValue)[[All, 1]], 
    "LeftPartColor" -> Gray, "DateListPlot" -> True, 
    "Echo" -> False, 
    ImageSize -> Medium]⟹
   QRMonTakeValue;
   

The function QRMonPlotStructuralBreakSplits returns an association that has as keys paired split points and Chow Test statistics; the plots are association’s values.

Here we tabulate the plots with the most significant breaks shown first.

Multicolumn[
 KeyValueMap[
  Show[#2, PlotLabel -> 
     Grid[{{"Point:", #1[[1]]}, {"Chow Test statistic:", #1[[2]]}}, Alignment -> Left]] &, KeySortBy[sbPlots, -#[[2]] &]], 2]
ComputationStructuralBreaksPlots

Future plans

We can further apply the algorithm explained above to identifying time series states or components. The structural break points are used as knots in appropriate Quantile Regression fitting. Here is an example.

The plan is to develop such an identifier of time series states in the near future. (And present it at WTC-2019.)

FuturePlansTimeSeriesStates

References

Articles

[Wk1] Wikipedia entry, Structural breaks.

[Wk2] Wikipedia entry, Chow test.

[AA1] Anton Antonov, “Finding local extrema in noisy data using Quantile Regression”, (2019), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, “A monad for Quantile Regression workflows”, (2018), MathematicaForPrediction at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic Structural Breaks Finder Mathematica package, (2019), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

Videos

[AAv1] Anton Antonov, Structural Breaks with QRMon, (2019), YouTube.

Call graph generation for context functions

In brief

This document describes the package CallGraph.m for making call graphs between the functions that belong to specified contexts.

The main function is CallGraph, which gives a graph with vertices that are function names and edges that show which functions call which other functions. With the default option values the graph vertices are labeled and have tooltips with function usage messages.

General design

The call graphs produced by the main package function CallGraph are assumed to be used for studying or refactoring large code bases written with Mathematica / Wolfram Language.

The argument of CallGraph is a context string or a list of context strings.

With the default values of its options CallGraph produces a graph with labeled nodes and the labels have tooltips that show the usage messages of the functions from the specified contexts. It is assumed that this would be the most useful call graph type for studying the codes of different sets of packages.

We can make a simple call graph, without labels and tooltips, using CallGraph[ ... , "UsageTooltips" -> False ].

The simple call graph can be modified with the functions:

CallGraphAddUsageMessages, CallGraphAddPrintDefinitionsButtons, CallGraphBiColorCircularEmbedding

Each of those functions decorates the simple graph in a particular way.

Package load

This loads the package CallGraph.m :

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/CallGraph.m"]

The following packages are used in the examples below.

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"]

Get["https://raw.githubusercontent.com/szhorvat/IGraphM/master/IGInstaller.m"];
Needs["IGraphM`"]

Usage examples

Generate a call graph with usage tooltips

CallGraph["IGraphM`", GraphLayout -> "SpringElectricalEmbedding", ImageSize -> Large]

IGraphM-call-graph-with-usage-tooltips

Generate a call graph by excluding symbols

gr = CallGraph["IGraphM`", Exclusions -> Map[ToExpression, Names["IG*Q"]], ImageSize -> 900]

IGraphM-call-graph-with-exclusions

Generate call graph with buttons to print definitions

gr0 = CallGraph["IGraphM`", "UsageTooltips" -> False];
gr1 = CallGraphAddPrintDefinitionsButtons[gr0, GraphLayout -> "StarEmbedding", ImageSize -> 900]

IGraphM-call-graph-with-definition-buttons

Generate circular embedding graph color

cols = RandomSample[ ColorData["Rainbow"] /@ Rescale[Range[VertexCount[gr1]]]];

CallGraphBiColorCircularEmbedding[ gr1, "VertexColors" -> cols, ImageSize -> 900 ]

IGraphM-call-graph-bicolor-Bezie

(The core functions used for the implementation of CallGraphBiColorCircularEmbedding were taken from kglr’s Mathematica Stack Exchange answer: https://mathematica.stackexchange.com/a/188390/34008 . Those functions were modified to take additional arguments.)

Options

The package functions "CallGraph*" take all of the options of the function Graph. Below are described the additional options of CallGraph; a small usage sketch follows the list.

  • “PrivateContexts”
    Should the functions of the private contexts be included in the call graph.

  • “SelfReferencing”
    Should the self referencing edges be excluded or not.

  • “AtomicSymbols”
    Should atomic symbols be included in the call graph.

  • Exclusions
    Symbols to be excluded from the call graph.

  • “UsageTooltips”
    Should vertex labels with the usage tooltips be added.

  • “UsageTooltipsStyle”
    The style of the usage tooltips.
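Here is the small usage sketch combining some of those options. (The Boolean values given here are assumptions consistent with the option descriptions above.)

CallGraph["IGraphM`", "PrivateContexts" -> False, "SelfReferencing" -> False, Exclusions -> Map[ToExpression, Names["IG*Q"]], ImageSize -> Large]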

Possible issues

  • With a large context (e.g. “System`”) the call graph generation might take a long time. (See the TODOs below.)

  • With “PrivateContexts”->False the call graph will be empty if the public functions do not depend on each other.

  • For certain packages the scanning of the down values may produce (multiple) error messages or warnings.

Future plans

The following is my TODO list for this project.

  1. Special handling for the “System`” context.

  2. Use the symbols up-values to make the call graph.

  3. Consider/implement call graph making with specified patterns and list of symbols.

    • Instead of just using contexts and exclusions. (The current approach/implementation.)
  4. Provide special functions for “call sequence” tracing for a specified symbol.

Parametrized event records data transformations

Introduction

In this document we describe transformations of event records data in order to make that data more amenable for the application of Machine Learning (ML) algorithms.

Consider the following problem formulation (given with the next five bullet points).

  • From data representing a (most likely very) diverse set of events we want to derive contingency matrices corresponding to each of the variables in that data.

  • The events are observations of the values of a certain set of variables for a certain set of entities. Not all entities have events for all variables.

  • The observation times do not form a regular time grid.

  • Each contingency matrix has rows corresponding to the entities in the data and has columns corresponding to time.

  • The software component providing the functionality should allow parametrization and repeated execution. (As in ML classifier training and testing scenarios.)

The phrase “event records data” is used instead of “time series” in order to emphasize that (i) some variables have categorical values, and (ii) the data can be given in some general database form, like a long-form transactions table.

The required transformations of the event records in the problem formulation above are done through the monad ERTMon, [AAp3]. (The name “ERTMon” comes from “Event Records Transformations Monad”.)

The monad code generation and utilization is explained in [AA1] and implemented with [AAp1].

It is assumed that the event records data is put in a form that makes it (relatively) easy to extract time series for the set of entity-variable pairs present in that data.

In brief ERTMon performs the following sequence of transformations.

  1. The event records of each entity-variable pair are shifted to adhere to a specified start or end point,

  2. The event records for each entity-variable pair are aggregated and normalized with specified functions over a specified regular grid,

  3. Entity vs. time interval contingency matrices are made for each combination of variable and aggregation function.

The transformations are specified with a “computation specification” dataset.

Here is an example of an ERTMon pipeline over event records:

ERTMon-small-pipeline-example
ERTMon-small-pipeline-example
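
In code form such a pipeline can be sketched as follows. (This is a hedged sketch; the functions are described in the section “Monad elements” below, and the variables eventRecords, entityAttributes, and wCompSpec are defined in the sections that follow.)

ERTMonUnit[]⟹
  ERTMonSetEventRecords[eventRecords]⟹
  ERTMonSetEntityAttributes[entityAttributes]⟹
  ERTMonSetComputationSpecification[wCompSpec]⟹
  ERTMonGroupEntityVariableRecords⟹
  ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
  ERTMonAggregateTimeSeries⟹
  ERTMonNormalize⟹
  ERTMonMakeContingencyMatrices;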

The rest of the document describes in detail:

  • the structure, format, and interpretation of the event records data and computations specifications,

  • the transformations of time series aligning, aggregation, and normalization,

  • the software pattern design – a monad – that allows sequential specifications of desired transformations.

Concrete examples are given using weather data. See [AAp9].

Package load

The following commands load the packages [AAp1-AAp9].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicEventRecordsTransformations.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/WeatherEventRecords.m"]

Data load

The data we use is weather data from meteorological stations close to certain major cities. We retrieve the data with the function WeatherEventRecords from the package [AAp9].

?WeatherEventRecords

WeatherEventRecords[ citiesSpec_: {{_String, _String}..}, dateRange:{{_Integer, _Integer, _Integer}, {_Integer, _Integer, _Integer}}, wProps:{_String..} : {"Temperature"}, nStations_Integer : 1 ] gives an association with event records data.

citiesSpec = {{"Miami", "USA"}, {"Chicago", "USA"}, {"London",  "UK"}};
dateRange = {{2017, 7, 1}, {2018, 6, 30}};
wProps = {"Temperature", "MaxTemperature", "Pressure", "Humidity", "WindSpeed"};
res1 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

citiesSpec = {{"Jacksonville", "USA"}, {"Peoria", "USA"}, {"Melbourne", "Australia"}};
dateRange = {{2016, 12, 1}, {2017, 12, 31}};
res2 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

Here we assign the obtained datasets to variables we use below:

eventRecords = Join[res1["eventRecords"], res2["eventRecords"]];
entityAttributes = Join[res1["entityAttributes"], res2["entityAttributes"]];

Here are the summaries of the datasets eventRecords and entityAttributes:

RecordsSummary[eventRecords]
ERTMon-RecordsSummary-eventRecord
ERTMon-RecordsSummary-eventRecord
RecordsSummary[entityAttributes]
ERTMon-RecordsSummary-entityAttributes
ERTMon-RecordsSummary-entityAttributes

Design considerations

Workflow

The steps of the main event records transformations workflow addressed in this document follow.

  1. Ingest event records and entity attributes given in the Star schema style.

  2. Ingest a computation specification.

    1. Specified are aggregation time intervals, aggregation functions, normalization types and functions.
  3. Group event records based on unique entity ID and variable pairs.
    1. Additional filtering can be applied using the entity attributes.
  4. For each variable find descriptive statistics properties.
    1. This is to facilitate normalization procedures.

    2. Optionally, for each variable find outlier boundaries.

  5. Align each group of records to start or finish at some specified point.

    1. For each variable we want to impose a regular time grid.
  6. From each group of records produce a time series.

  7. For each time series do prescribed aggregation and normalization.

    1. The variable that corresponds to each group of records has at least one (possibly several) computation specifications.
  8. Make a contingency matrix for each time series obtained in the previous step.
    1. The contingency matrices have entity ID’s as rows and columns enumerating the time intervals.

The following flow-chart corresponds to the list of steps above.

ERTMon-workflows
ERTMon-workflows

A corresponding monadic pipeline is given in the section “Larger example pipeline”.

Feature engineering perspective

The workflow above describes a way to do feature engineering over a collection of event records data. For a given entity ID and a variable we derive several different time series.

A couple of examples follow.

  • One possible derived feature (time series): for each entity-variable pair we make a time series of the hourly mean value over each of the eight most recent hours for that entity. The mean values are normalized by the average value of the records corresponding to that entity-variable pair.

  • Another possible derived feature (time series): for each entity-variable pair we make a time series with the number of outliers in each half-hour interval, considering the most recent 20 half-hour intervals. The outliers are found by using outlier boundaries derived by analyzing all values of the corresponding variable, across all entities.

From the examples above – and some others – we conclude that for each feature we want to be able to specify:

  • maximum history length (say from the most recent observation),

  • aggregation interval length,

  • aggregation function (to be applied in each interval),

  • normalization function (per entity, per cohort of entities, per variable),

  • conversion of categorical values into numerical ones.

Repeated execution

We want to be able to do repeated executions of the specified workflow steps.

Consider the following scenario. After the event records data is converted to an entity-vs-feature contingency matrix, we use that matrix to train and test a classifier. We want to find the combination of features that gives the best classifier results. For that reason we want to be able to easily and systematically change the computation specifications (interval size, aggregation and normalization functions, etc.). Different computation specifications yield different entity-vs-feature contingency matrices, which in turn yield different classifier performance.

Using the classifier training and testing scenario we see that there is another repeated execution perspective: after the feature engineering is done over the training data, we want to be able to execute exactly the same steps over the test data. Note that with the training data we find certain global or cohort normalization values and outlier boundaries that have to be used over the test data. (Not derived from the test data.)

The following diagram further describes the repeated execution workflow.

ERTMon-repeated-execution-workflow
ERTMon-repeated-execution-workflow

Further discussion of making and using ML classification workflows through the monad software design pattern can be found in [AA2].

Event records data design

The data is structured to follow the style of a Star schema. We have an event records dataset (table) and an entity attributes dataset (table).

The structure of the proposed datasets (tables) satisfies a wide range of data modeling requirements. (Medical and financial modeling included.)

Entity event data

The entity event data has the columns “EntityID”, “LocationID”, “ObservationTime”, “Variable”, “Value”.

RandomSample[eventRecords, 6]
ERTMon-eventRecords-sample
ERTMon-eventRecords-sample

Most events can be described through “Entity event data”. The entities can be anything that produces a set of event data: financial transactions, vital sign monitors, wind speed sensors, chemical concentrations sensors.

The locations can be anything that gives the events certain “spatial” attributes: medical units in hospitals, sensors geo-locations, tiers of financial transactions.

Entity attributes data

The entity attributes dataset (table) has attributes (immutable properties) of the entities. (Like, gender and race for people, longitude and latitude for wind speed sensors.)

entityAttributes[[1 ;; 6]]
ERTMon-entityAttributes-sample
ERTMon-entityAttributes-sample

Example

For example, here we take all weather stations in USA:

ws = Normal[entityAttributes[Select[#Attribute == "Country" && #Value == "USA" &], "EntityID"]]

(* {"KMFL", "KMDW", "KNIP", "KGEU"} *)

Here we take all temperature event records for those weather stations:

srecs = eventRecords[Select[#Variable == "Temperature" && MemberQ[ws, #EntityID] &]];

And here we plot the corresponding time series obtained by grouping the records by station (entity ID’s) and taking the columns “ObservationTime” and “Value”:

grecs = Normal @ GroupBy[srecs, #EntityID &][All, All, {"ObservationTime", "Value"}];
DateListPlot[grecs, ImageSize -> Large, PlotTheme -> "Detailed"]
ERTMon-DateListPlot-USA-Temperature
ERTMon-DateListPlot-USA-Temperature

Monad elements

This section goes through the steps of the general ERTMon workflow. For didactic purposes each sub-section changes the pipeline assigned to the variable p. Of course all functions can be chained into one big pipeline as shown in the section “Larger example pipeline”.

Monad unit

The monad is initialized with ERTMonUnit.

ERTMonUnit[]

(* ERTMon[None, <||>] *)

Ingesting event records and entity attributes

The event records dataset (table) and entity attributes dataset (table) are set with corresponding setter functions. Alternatively, they can be read from files in a specified directory.

p =
  ERTMonUnit[]⟹
   ERTMonSetEventRecords[eventRecords]⟹
   ERTMonSetEntityAttributes[entityAttributes]⟹
   ERTMonEchoDataSummary;
   
ERTMon-echo-data-summary
ERTMon-echo-data-summary

Computation specification

Using the package [AAp3] we can create a computation specification dataset. Below is an example of constructing a fairly complicated computation specification.

The package function EmptyComputationSpecificationRow can be used to construct the rows of the specification.

EmptyComputationSpecificationRow[]

(* <|"Variable" -> Missing[], "Explanation" -> "", 
    "MaxHistoryLength" -> 3600, "AggregationIntervalLength" -> 60, 
    "AggregationFunction" -> "Mean", "NormalizationScope" -> "Entity", 
    "NormalizationFunction" -> "None"|> *)
    
    
compSpecRows = 
  Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
      "MaxHistoryLength" -> 60*24*3600, 
      "AggregationIntervalLength" -> 2*24*3600, 
      "AggregationFunction" -> "Mean", 
      "NormalizationScope" -> "Entity", 
      "NormalizationFunction" -> "Mean"|>] & /@ 
   Union[Normal[eventRecords[All, "Variable"]]];
compSpecRows =
  Join[
   compSpecRows, 
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "Range", 
       "NormalizationScope" -> "Country", 
       "NormalizationFunction" -> "Mean"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]],
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "OutliersCount", 
       "NormalizationScope" -> "Variable"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]]
   ];

The constructed rows are assembled into a dataset (with Dataset). The function ProcessComputationSpecification is used to convert a user-made specification dataset into a form used by ERTMon.

wCompSpec = 
 ProcessComputationSpecification[Dataset[compSpecRows]][SortBy[#Variable &]]
 
ERTMon-wCompSpec
ERTMon-wCompSpec

The computation specification is set to the monad with the function ERTMonSetComputationSpecification.

Alternatively, a computation specification can be created and filled-in as a CSV file and read into the monad. (Not described here.)
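
A hedged sketch of that alternative follows; the file name is hypothetical, and SemanticImport is used to read the CSV file into a dataset before processing it with ProcessComputationSpecification. The result can then be set into the monad with ERTMonSetComputationSpecification, as in the next section.

csvSpec = SemanticImport["compSpec.csv"]; (* hypothetical file name *)
wCompSpecCSV = ProcessComputationSpecification[csvSpec];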

Grouping event records by entity-variable pairs

With the function ERTMonGroupEntityVariableRecords we group the event records by the found unique entity-variable pairs. Note that in the pipeline below we set the computation specification first.

p =
  p⟹
   ERTMonSetComputationSpecification[wCompSpec]⟹
   ERTMonGroupEntityVariableRecords;

Descriptive statistics (per variable)

After the data is ingested into the monad and the event records are grouped per entity-variable pairs we can find certain descriptive statistics for the data. This is done with the general function ERTMonComputeVariableStatistic and the specialized function ERTMonFindVariableOutlierBoundaries.

p⟹ERTMonComputeVariableStatistic[RecordsSummary]⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-1
ERTMon-compute-variable-statistic-1
p⟹ERTMonComputeVariableStatistic⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-2
ERTMon-compute-variable-statistic-2
p⟹ERTMonComputeVariableStatistic[TakeLargest[#, 3] &]⟹ERTMonEchoValue;

(* value: <|Humidity->{1.,1.,0.993}, MaxTemperature->{48,48,48},
            Pressure->{1043.1,1042.8,1041.1}, Temperature->{42.28,41.94,41.89},
            WindSpeed->{54.82,44.63,44.08}|> *)
            
            

Finding the variables outlier boundaries

The finding of outlier counts and fractions can be specified in the computation specification. Because of this there is a specialized function for outlier finding, ERTMonFindVariableOutlierBoundaries. That function makes the association of the found variable outlier boundaries (i) the pipeline value and (ii) the value of the context key “variableOutlierBoundaries”. The outlier boundaries are found using the functions of the package [AAp6].

If no argument is specified ERTMonFindVariableOutlierBoundaries uses the Hampel identifier (HampelIdentifierParameters).

p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonEchoValue;

(* value: <|Humidity->{0.522536,0.869464}, MaxTemperature->{14.2106,31.3494},
            Pressure->{1012.36,1022.44}, Temperature->{9.88823,28.3318},
            WindSpeed->{5.96141,19.4086}|> *)
            
Keys[p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonTakeContext]

(* {"eventRecords", "entityAttributes", "computationSpecification",
    "entityVariableRecordGroups", "variableOutlierBoundaries"} *)
   

In the rest of the document we use the outlier boundaries found with the more conservative identifier SPLUSQuartileIdentifierParameters.

p =
  p⟹
   ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
   ERTMonEchoValue;

(* value: <|Humidity->{0.176,1.168}, MaxTemperature->{-1.67,45.45},
            Pressure->{1003.75,1031.35}, Temperature->{-5.805,43.755},
            WindSpeed->{-5.005,30.555}|> *)

Conversion of event records to time series

The grouped event records are converted into time series with the function ERTMonEntityVariableGroupsToTimeSeries. The time series are aligned to a time point specification given as an argument. The argument can be: a date object, “MinTime”, “MaxTime”, or “None”. (“MaxTime” is the default.)

p⟹
  ERTMonEntityVariableGroupsToTimeSeries["MinTime"]⟹
  ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
  
ERTMon-records-groups-minTime
ERTMon-records-groups-minTime

Compare the last output with the output of the following command.

p =
  p⟹
   ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
   ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
ERTMon-records-groups-maxTime
ERTMon-records-groups-maxTime
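
A date object can also be given as the alignment point. Here is a hedged sketch (the particular date is arbitrary):

p⟹
  ERTMonEntityVariableGroupsToTimeSeries[DateObject[{2018, 6, 30}]]⟹
  ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];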

Time series restriction and aggregation

The main goal of ERTMon is to convert a diverse, general collection of event records into a collection of aligned time series over specified regular time grids.

The regular time grids are specified with the columns “MaxHistoryLength” and “AggregationIntervalLength” of the computation specification. The time series of the variables in the computation specification are restricted to the corresponding maximum history lengths and are aggregated using the corresponding aggregation lengths and functions.

p =
  p⟹
   ERTMonAggregateTimeSeries⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-restriction-and-aggregation
ERTMon-restriction-and-aggregation

Application of time series functions

At this point we can apply time series modifying functions. An often used function of that kind is the moving average.

p⟹
  ERTMonApplyTimeSeriesFunction[MovingAverage[#, 6] &]⟹
  ERTMonEchoFunctionValue[DateListPlot /@ #[[{1, 3, 5}]] &];
ERTMon-moving-average-application
ERTMon-moving-average-application

Note that the result is given as a pipeline value; the value of the context key “timeSeries” is not changed.

(In the future, the computation specification and its handling might be extended to handle moving average or other time series function specifications.)

Normalization

With “normalization” we mean that the values of a given time series are divided (normalized) by a descriptive statistic derived from a specified set of values. The specified set of values is given with the parameter “NormalizationScope” in the computation specification.

At the normalization stage each time series is associated with an entity ID and a variable.

Normalization is done at three different scopes: “entity”, “attribute”, and “variable”.

Given a time series T(i,var) corresponding to entity ID i and a variable var we define the normalization values for the different scopes in the following ways.

  • Normalization with scope “entity” means that the descriptive statistic is derived from the values of T(i,var) only.

  • Normalization with scope “attribute” means that

    • from the entity attributes dataset we find the attribute value that corresponds to i,

    • next we find all entity ID’s that are associated with the same attribute value,

    • next we find the value of the normalization descriptive statistic using the time series that correspond to the variable var and the entity ID’s found in the previous step.

  • Normalization with scope “variable” means that the descriptive statistic is derived from the values of all time series corresponding to var.

Note that the scope “entity” is the most granular, and the scope “variable” is the coarsest.
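
As a plain-WL illustration of the arithmetic (this is not ERTMon code): for the “entity” scope with a “Mean” normalization function a single time series is simply divided by the mean of its own values.

ts = TimeSeries[{{1, 10.}, {2, 12.}, {3, 14.}}];
(ts/Mean[ts["Values"]])["Values"]

(* {0.833333, 1., 1.16667} *)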

The following command demonstrates the normalization effect – compare the y-axes scales of the time series corresponding to the same entity-variable pair.

p =
  p⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &]⟹
   ERTMonNormalize⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-normalization
ERTMon-normalization

Here are the normalization values that should be used when normalizing “unseen data.”

p⟹ERTMonTakeNormalizationValues

(* <|{"Humidity.Range", "Country", "USA"} -> 0.0864597, 
  {"Humidity.Range", "Country", "UK"} -> 0.066, 
  {"Humidity.Range", "Country", "Australia"} -> 0.145968, 
  {"MaxTemperature.Range", "Country", "USA"} -> 2.85468, 
  {"MaxTemperature.Range", "Country", "UK"} -> 78/31, 
  {"MaxTemperature.Range", "Country", "Australia"} -> 3.28871, 
  {"Pressure.Range", "Country", "USA"} -> 2.08222, 
  {"Pressure.Range", "Country", "Australia"} -> 3.33871, 
  {"Temperature.Range", "Country", "USA"} -> 2.14411, 
  {"Temperature.Range", "Country", "UK"} -> 1.25806, 
  {"Temperature.Range", "Country", "Australia"} -> 2.73032, 
  {"WindSpeed.Range", "Country", "USA"} -> 4.13532, 
  {"WindSpeed.Range", "Country", "UK"} -> 3.62097, 
  {"WindSpeed.Range", "Country", "Australia"} -> 3.17226|> *)

Making contingency matrices

One of the main goals of ERTMon is to produce contingency matrices corresponding to the event records data.

The contingency matrices are created and stored as SSparseMatrix objects, [AAp7].

p =
  p⟹ERTMonMakeContingencyMatrices;

We can obtain an association of the contingency matrices for each variable-and-aggregation-function pair, or obtain the overall contingency matrix.

p⟹ERTMonTakeContingencyMatrices
Dimensions /@ %
ERTMon-contingency-matrices
ERTMon-contingency-matrices
smat = p⟹ERTMonTakeContingencyMatrix;
MatrixPlot[smat, ImageSize -> 700]
ERTMon-contingency-matrix
ERTMon-contingency-matrix
RowNames[smat]

(* {"EGLC", "KGEU", "KMDW", "KMFL", "KNIP", "WMO95866"} *)
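
Since smat is an SSparseMatrix object, [AAp7], the usual matrix queries apply. For example (ColumnNames is assumed to be provided by [AAp7], analogously to RowNames):

Dimensions[smat]
Short[ColumnNames[smat]]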

Larger example pipeline

The pipeline shown in this section utilizes all main workflow functions of ERTMon. The used weather data and computation specification are described above.

ERTMon-large-pipeline-example
ERTMon-large-pipeline-example
ERTMon-large-pipeline-example-output
ERTMon-large-pipeline-example-output

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Event Records Transformations Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicEventRecordsTransformations.m .

[AAp4] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp5] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m .

[AAp6] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/OutlierIdentifiers.m.

[AAp7] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/SSparseMatrix.m .

[AAp8] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp9] Anton Antonov, Weather event records data Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Misc/WeatherEventRecords.m .

Documents

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2017/06/23/monad-code-generation-and-extension .

[AA1a] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, A monad for classification workflows, (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows .

QRMon for some credit risk article

Introduction

In this notebook/document we apply the monad QRMon, [3], over data of the article [1]. In order to get the data we use the extraction procedure described in [2].

(I saw the article [1] while browsing LinkedIn today. I met one of the authors during the event "Data Science Salon Miami Feb 2018".)

Extract data

I extracted the data from the image using the procedure from "Recovering data points from an image", [2].

img = Import["https://www.spglobal.com/_assets/images/marketintelligence/blog-images/demonstration-of-model-fit-comparison-visualization.png"]

extractedData
(* {{-1., 0.284894}, {-0.987395, 0.340483}, {-0.966387, 
  0.215408}, {-0.941176, 0.416918}, {-0.894958, 0.222356}, {-0.890756,
   0.215408}, {-0.878151, 0.0903323}, {-0.848739, 
  0.132024}, {-0.844538, 0.10423}, {-0.831933, 0.333535}, {-0.819328, 
  0.180665}, {-0.781513, 0.423867}, {-0.756303, 0.40997}, {-0.752101, 
  0.528097}, {-0.747899, 0.416918}, {-0.731092, 0.375227}, {-0.714286,
   0.194562}, {-0.710084, 0.340483}, {-0.651261, 
  0.555891}, {-0.647059, 0.333535}, {-0.605042, 0.496828}, {-0.57563, 
  0.}, {-0.512605, 0.354381}, {-0.491597, 0.368278}, {-0.487395, 
  0.472508}, {-0.478992, 0.479456}, {-0.453782, 0.437764}, {-0.357143,
   0.15287}, {-0.344538, 0.340483}, {-0.331933, 0.333535}, {-0.315126,
   0.500302}, {-0.285714, 0.396073}, {-0.247899, 
  0.618429}, {-0.201681, 0.541994}, {-0.159664, 0.680967}, {-0.10084, 
  1.06314}, {-0.0966387, 0.993656}, {0., 1.36193}, {0.0210084, 
  1.44532}, {0.0420168, 1.5148}, {0.0504202, 1.5148}, {0.0882353, 
  1.41405}, {0.130252, 1.70937}, {0.172269, 2.029}, {0.176471, 
  1.7858}, {0.222689, 2.20272}, {0.226891, 2.23746}, {0.231092, 
  2.23746}, {0.239496, 1.96647}, {0.268908, 1.94562}, {0.273109, 
  1.91088}, {0.277311, 1.91088}, {0.281513, 1.94562}, {0.294118, 
  2.2861}, {0.319328, 2.26526}, {0.327731, 2.3}, {0.432773, 
  1.68157}, {0.462185, 1.86918}, {0.5, 2.00121}} *)

ListPlot[extractedData, PlotRange -> All, PlotTheme -> "Detailed"]

Apply QRMon

Load packages. (For more details see [4].)

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Apply the QRMon workflow within the TraceMonad:

TraceMonadUnit[QRMonUnit[extractedData]]⟹"lift data to the monad"⟹
  QRMonEchoDataSummary⟹"echo data summary"⟹
  QRMonQuantileRegression[12, 0.5]⟹"do Quantile Regression with\nB-spline basis with 12 knots"⟹
  QRMonPlot⟹"plot the data and regression curve"⟹
  QRMonEcho[Style["Tabulate QRMon steps and explanations:", Purple, Bold]]⟹"echo an explanation message"⟹
  TraceMonadEchoGrid;

References

[1] Moody Hadi and Danny Haydon, "A Perspective On Machine Learning In Credit Risk", (2018), S&P Global Market Intelligence.

[2] Andy Ross, answer to "Recovering data points from an image", (2012).

[3] Anton Antonov, "A monad for Quantile Regression workflows", (2018), MathematicaForPrediction at WordPress.

[4] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction .

A monad for Quantile Regression workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for Quantile Regression workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Quantile Regression? : Assume we have a set of two dimensional points each point being a pair of an independent variable value and a dependent variable value. We want to find a curve that is a function of the independent variable that splits the points in such a way that, say, 30% of the points are above that curve. This is done with Quantile Regression, see [Wk2, CN1, AA2, AA3]. Quantile Regression is a method to estimate the variable relations for all parts of the distribution. (Not just, say, the mean of the relationships found with Least Squares Regression.)

The goal of the monad design is to make the specification of Quantile Regression workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario. Since Quantile Regression is often compared with Least Squares Regression and some type of filtering (like Moving Average) those functionalities should be included in the monad design scenarios.

The monad is named QRMon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1] and the Quantile Regression package "QuantileRegression.m", [AAp4, AA2].

The data for this document is read from WL’s repository or created ad-hoc.

The monadic programming design is used as a Software Design Pattern. The QRMon monad can be also seen as a Domain Specific Language (DSL) for the specification and programming of Quantile Regression workflows.

Here is an example of using the QRMon monad over heteroscedastic data:

QRMon-introduction-monad-pipeline-example-table

QRMon-introduction-monad-pipeline-example-table

QRMon-introduction-monad-pipeline-example-echo

QRMon-introduction-monad-pipeline-example-echo

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As it was mentioned above the monad QRMon can be seen as a DSL. Because of this the monad pipelines made with QRMon are sometimes called "specifications".

Remark: With "regression quantile" we mean "a curve or function that is computed with Quantile Regression".

Contents description

The document has the following structure.

  • The sections "Package load" and "Data load" obtain the needed code and data.
  • The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

  • The sections "QRMon overview" and "Monad elements" provide technical description of the QRMon monad needed to utilize it.

    • (Using a fair amount of examples.)
  • The section "Unit tests" describes the tests used in the development of the QRMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section "Future plans" outlines future directions of development.
  • The section "Implementation notes" just says that QRMon’s development process and this document follow the ones of the classifications workflows monad ClCon, [AA6].

Remark: One can read only the sections "Introduction", "Design consideration", "Monad design", and "QRMon overview". That set of sections provide a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.


Package load

The following commands load the packages [AAp1–AAp6]:

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load data that is used in the rest of the document. The time series data is obtained through WL’s repository.

The data summarization and plots are done through QRMon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp6].

Distribution data

The following data is generated to have heteroscedasticity, see https://en.wikipedia.org/wiki/Heteroscedasticity .

distData = 
  Table[{x, 
    Exp[-x^2] + 
     RandomVariate[
      NormalDistribution[0, .15 Sqrt[Abs[1.5 - x]/1.5]]]}, {x, -3, 
    3, .01}];
Length[distData]

(* 601 *)

QRMonUnit[distData]⟹QRMonEchoDataSummary⟹QRMonPlot;
QRMon-distData

QRMon-distData

Temperature time series

tsData = WeatherData[{"Orlando", "USA"}, "Temperature", {{2015, 1, 1}, {2018, 1, 1}, "Day"}]

QRMonUnit[tsData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-tsData

QRMon-tsData

Financial time series

The following data is typical for financial time series. (Note the differences with the temperature time series.)

finData = TimeSeries[FinancialData["NYSE:GE", {{2014, 1, 1}, {2018, 1, 1}, "Day"}]];

QRMonUnit[finData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;
QRMon-finData

QRMon-finData

Design considerations

The steps of the main regression workflow addressed in this document follow.

  1. Retrieving data from a data repository.

  2. Optionally, transform the data.

    1. Delete rows with missing fields.

    2. Rescale data along one or both of the axes.

    3. Apply moving average (or median, or map.)

  3. Verify assumptions of the data.

  4. Run a regression algorithm with a certain basis of functions using:

    1. Quantile Regression, or

    2. Least Squares Regression.

  5. Visualize the data and regression functions.

  6. If the regression functions fit is not satisfactory go to step 4.

  7. Utilize the found regression functions to compute:

    1. outliers,

    2. local extrema,

    3. approximation or fitting errors,

    4. conditional density distributions,

    5. time series simulations.

The following flow-chart corresponds to the list of steps above.

Quantile-regression-workflow-extended

Quantile-regression-workflow-extended

In order to address:

  • the introduction of new elements in regression workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for regression workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a regression workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […] Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

Monad design

The monad we consider is designed to speed-up the programming of quantile regression workflows outlined in the previous section. The monad is named QRMon for "Quantile Regression Monad".

We want to be able to construct monad pipelines of the general form:

QRMon-formula-1

QRMon-formula-1

QRMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

QRMon-formula-2

QRMon-formula-2

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of QRMon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or

  2. The object is (relatively) hard to compute.

Such objects are transformed data, regression functions, and outliers.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial quantile regression workflows.

  • The monad works with time series, numerical matrices, and numerical vectors.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • The monad can do quantile regression with B-spline bases, and quantile regression fits and least squares fits with specified bases of functions.

  • The monad allows cursory examination and summarization of the data.

  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • It is easy to plot different combinations of data, regression functions, outliers, approximation errors, etc.

The QRMon components and their interactions are fairly simple.

The main QRMon operations implicitly put in the context or utilize from the context the following objects:

  • (time series) data,

  • regression functions,

  • outliers and outlier regression functions.

Note that the set of types of QRMon pipeline values is fairly heterogeneous and certain awareness of "the current pipeline value" is assumed when composing QRMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

QRMon overview

When using a monad we lift certain data into the "monad space", using monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the QRMon monad is done with the function QRMonUnit. Results from the monad can be obtained with the functions QRMonTakeValue, QRMonTakeContext, or with the other QRMon functions with the prefix "QRMonTake" (see below.)
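
Here is a minimal lift-compute-take round trip using functions described in the following sections (distData is defined in the section "Data load" above):

QRMonUnit[distData]⟹
  QRMonQuantileRegression[12, {0.25, 0.75}]⟹
  QRMonEvaluate[0]⟹
  QRMonTakeValue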

Here is a corresponding diagram of a generic computation with the QRMon monad:

QRMon-pipeline

QRMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete QRMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is combined together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The QRMon functions are separated into four groups:

  • operations,

  • setters and droppers,

  • takers,

  • State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the QRMon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the QRMon monad functions with the pipeline value and context.

QRMon-monad-functions-overview-table

QRMon-monad-functions-overview-table

The following table shows the functions that are function synonyms or short-cuts.

QRMon-monad-functions-shortcuts-table

QRMon-monad-functions-shortcuts-table

State monad functions

Here are the QRMon State Monad functions (generated using the prefix "QRMon", [AAp1, AA1]):

QRMon-StMon-functions-overview-table

QRMon-StMon-functions-overview-table

Monad elements

In this section we show that QRMon has all of the properties listed in the previous section.

The monad head

The monad head is QRMon. Anything wrapped in QRMon can serve as monad’s pipeline value. It is better though to use the constructor QRMonUnit. (Which adheres to the definition in [Wk1].)

QRMon[{{1, 223}, {2, 323}}, <||>]⟹QRMonEchoDataSummary;
The-monad-head-output

The-monad-head-output

Lifting data to the monad

The function lifting the data into the monad QRMon is QRMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

QRMonUnit[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

Lifting-data-to-the-monad-output

QRMonUnit[]⟹QRMonSetData[distData]⟹QRMonEchoDataSummary;
Lifting-data-to-the-monad-output

Lifting-data-to-the-monad-output

(See the sub-section "Setters, droppers, and takers" for more details of setting and taking values in QRMon contexts.)

Currently the monad can deal with data in the following forms:

  • time series,

  • numerical vectors,

  • numerical matrices of rank two.

When the data lifted to the monad is a numerical vector vec it is assumed that vec has to become the second column of a "time series" matrix; the first column is derived with Range[Length[vec]] .
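
For instance, here is a minimal illustration of lifting a plain numerical vector; the "time" coordinates are implied by Range:

QRMonUnit[RandomReal[{0, 1}, 30]]⟹QRMonEchoDataSummary;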

Generally, WL makes it easy to extract columns from datasets in order to obtain numerical matrices, so datasets are not currently supported in QRMon.

Quantile regression with B-splines

This computes quantile regression with B-spline basis over 12 regularly spaced knots. (Using Linear Programming algorithms; see [AA2] for details.)

QRMonUnit[distData]⟹
  QRMonQuantileRegression[12]⟹
  QRMonPlot;
Quantile-regression-with-B-splines-output-1

Quantile-regression-with-B-splines-output-1

The monad function QRMonQuantileRegression has the same options as QuantileRegression. (The default value for option Method is different, since using "CLP" is generally faster.)

Options[QRMonQuantileRegression]

(* {InterpolationOrder -> 3, Method -> {LinearProgramming, Method -> "CLP"}} *)

Let us compute regression using a list of particular knots, specified quantiles, and the method "InteriorPoint" (instead of the Linear Programming library CLP):

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[{-3, -2, -1, 0, 1, 1.5, 2.5, 3}, Range[0.1, 0.9, 0.2], Method -> {LinearProgramming, Method -> "InteriorPoint"}]⟹
   QRMonPlot;
Quantile-regression-with-B-splines-output-2

Quantile-regression-with-B-splines-output-2

Remark: As it was mentioned above the function QRMonRegression is a synonym of QRMonQuantileRegression.

The fit functions can be extracted from the monad with QRMonTakeRegressionFunctions, which gives an association of quantiles and pure functions.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-with-B-splines-output-3

Quantile-regression-with-B-splines-output-3

Quantile regression fit and Least squares fit

Instead of using a B-spline basis of functions we can compute a fit with our own basis of functions.

Here is a basis of functions:

bFuncs = Table[PDF[NormalDistribution[m, 1], x], {m, Min[distData[[All, 1]]], Max[distData[[All, 1]]], 1}];
Plot[bFuncs, {x, Min[distData[[All, 1]]], Max[distData[[All, 1]]]}, 
 PlotRange -> All, PlotTheme -> "Scientific"]
Quantile-regression-fit-and-Least-squares-fit-basis

Quantile-regression-fit-and-Least-squares-fit-basis

Here we do a Quantile Regression fit, a Least Squares fit, and plot the results:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bFuncs]⟹
   QRMonLeastSquaresFit[bFuncs]⟹
   QRMonPlot;
   
Quantile-regression-fit-and-Least-squares-fit-output-1

Quantile-regression-fit-and-Least-squares-fit-output-1

Remark: The functions "QRMon*Fit" should generally have a second argument for the symbol of the basis functions independent variable. Often that symbol can be omitted and implied. (Which can be seen in the pipeline above.)

Remark: As it was mentioned above the function QRMonRegressionFit is a synonym of QRMonQuantileRegressionFit and QRMonFit is a synonym of QRMonLeastSquaresFit.

As it was pointed out in the previous sub-section, the fit functions can be extracted from the monad with QRMonTakeRegressionFunctions. Here the keys of the returned/taken association consist of quantiles and "mean" since we applied both Quantile Regression and Least Squares Regression.

ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)
Quantile-regression-fit-and-Least-squares-fit-output-2

Quantile-regression-fit-and-Least-squares-fit-output-2

Default basis to fit (using Chebyshev polynomials)

One of the main advantages of using the function QuantileRegression of the package [AAp4] is that the functions used to do the regression are specified with a few numeric parameters. (Most often only the number of knots is sufficient.) This is achieved by using a basis of B-spline functions of a certain interpolation order.

In order to get similar behaviour for Quantile Regression fitting we need to select a well known basis with desirable properties. Such a basis is given by the Chebyshev polynomials of the first kind, [Wk3]. Chebyshev polynomial bases can be easily generated in Mathematica with the functions ChebyshevT or ChebyshevU.

Here is an application of fitting with a basis of 12 Chebyshev polynomials of first kind:

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[12]⟹
  QRMonLeastSquaresFit[12]⟹
  QRMonPlot;
Default-basis-to-fit-output-1-and-2

Default-basis-to-fit-output-1-and-2

The code above is equivalent to the following code:

bfuncs = Table[ChebyshevT[i, Rescale[x, MinMax[distData[[All, 1]]], {-0.95, 0.95}]], {i, 0, 12}];

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegressionFit[bfuncs]⟹
   QRMonLeastSquaresFit[bfuncs]⟹
   QRMonPlot;
Default-basis-to-fit-output-1-and-2

Default-basis-to-fit-output-1-and-2

The shrinking of the ChebyshevT domain seen in the definition of bfuncs is done in order to prevent approximation error effects at the ends of the data domain. The following code uses the ChebyshevT domain {-1, 1} instead of the domain {-0.95, 0.95} used above.

QRMonUnit[distData]⟹
  QRMonQuantileRegressionFit[{4, {-1, 1}}]⟹
  QRMonPlot;
Default-basis-to-fit-output-3

Default-basis-to-fit-output-3

Regression functions evaluation

The computed quantile and least squares regression functions can be evaluated with the monad function QRMonEvaluate.

Evaluation for a given value of the independent variable:

p⟹QRMonEvaluate[0.12]⟹QRMonTakeValue

(* <|0.25 -> 0.930402, 0.5 -> 1.01411, 0.75 -> 1.08075, "mean" -> 0.996963|> *)

Evaluation for a vector of values:

p⟹QRMonEvaluate[Range[-1, 1, 0.5]]⟹QRMonTakeValue

(* <|0.25 -> {0.258241, 0.677461, 0.943299, 0.703812, 0.293741}, 
     0.5 -> {0.350025, 0.768617, 1.02311, 0.807879, 0.374545}, 
     0.75 -> {0.499338, 0.912183, 1.10325, 0.856729, 0.431227}, 
     "mean" -> {0.355353, 0.776006, 1.01118, 0.783304, 0.363172}|> *)

Evaluation for complicated lists of numbers:

p⟹QRMonEvaluate[{0, 1, {1.5, 1.4}}]⟹QRMonTakeValue

(* <|0.25 -> {0.943299, 0.293741, {0.0762883, 0.10759}}, 
     0.5 -> {1.02311, 0.374545, {0.103386, 0.139142}}, 
     0.75 -> {1.10325, 0.431227, {0.133755, 0.177161}}, 
     "mean" -> {1.01118, 0.363172, {0.107989, 0.142021}}|> *)
   

The obtained values can be used to compute estimates of the distributions of the dependent variable. See the sub-sections "Estimating conditional distributions" and "Dependent variable simulation".

Errors and error plots

Here with "errors" we mean the differences between data’s dependent variable values and the corresponding values calculated with the fitted regression curves.
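
To make that definition concrete, here is a hedged, plain-WL illustration of computing such differences for a median regression quantile. (It mirrors the definition above; it is not necessarily the exact output format of QRMonErrors.)

qFuncs =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[12, 0.5]⟹
   QRMonTakeRegressionFunctions;
errors = Map[{#[[1]], #[[2]] - qFuncs[0.5][#[[1]]]} &, distData];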

In the pipeline below we compute a couple of regression quantiles, plot them together with the data, plot the errors, compute the errors, and summarize them.

QRMonUnit[finData]⟹
  QRMonQuantileRegression[10, {0.5, 0.1}]⟹
  QRMonDateListPlot[Joined -> False]⟹
  QRMonErrorPlots["DateListPlot" -> True, Joined -> False]⟹
  QRMonErrors⟹
  QRMonEchoFunctionValue["Errors summary:", RecordsSummary[#[[All, 2]]] & /@ # &];
Errors-and-error-plots-output-1

Errors-and-error-plots-output-1

Each of the functions QRMonErrors and QRMonErrorPlots computes the errors. (That computation is considered cheap.)

Finding outliers

Finding outliers can be done with the function QRMonOutliers. The outliers found by QRMonOutliers are simply points that are below or above certain regression quantile curves, for example, the ones corresponding to 0.02 and 0.98.

Here is an example:

p =
  QRMonUnit[distData]⟹
   QRMonQuantileRegression[6, {0.02, 0.98}]⟹
   QRMonOutliers⟹
   QRMonEchoValue⟹
   QRMonOutliersPlot;
Finding-outliers-output-1

Finding-outliers-output-1

The function QRMonOutliers puts in the context values for the keys "outliers" and "outlierRegressionFunctions". The former is for the found outliers, the latter is for the functions corresponding to the used regression quantiles.

Keys[p⟹QRMonTakeContext]

(* {"data", "regressionFunctions", "outliers", "outlierRegressionFunctions"} *)

Here are the corresponding quantiles of the plot above:

Keys[p⟹QRMonTakeOutlierRegressionFunctions]

(* {0.02, 0.98} *)

The control of the outliers computation is done through the arguments and options of QRMonQuantileRegression (or the rest of the regression calculation functions.)

If only one regression quantile is found in the context and the corresponding quantile is less than 0.5 then QRMonOutliers finds only bottom outliers. If only one regression quantile is found in the context and the corresponding quantile is greater than 0.5 then QRMonOutliers finds only top outliers.

Here is an example for finding only the top outliers:

QRMonUnit[finData]⟹
  QRMonQuantileRegression[5, 0.8]⟹
  QRMonOutliers⟹
  QRMonEchoFunctionContext["outlier quantiles:", Keys[#outlierRegressionFunctions] &]⟹
  QRMonOutliersPlot["DateListPlot" -> True];
  
Finding-outliers-output-2

Finding-outliers-output-2

Plotting outliers

The function QRMonOutliersPlot makes an outliers plot. If the outliers are not in the context then QRMonOutliersPlot calls QRMonOutliers first.

Here are the options of QRMonOutliersPlot:

Options[QRMonOutliersPlot]

(* {"Echo" -> True, "DateListPlot" -> False, ListPlot -> {Joined -> False}, Plot -> {}} *)

The default behavior is to echo the plot. That can be suppressed with the option "Echo".

QRMonOutliersPlot utilizes Show to combine two plots:

  • one with ListPlot (or DateListPlot) for the data and the outliers,

  • the other with Plot for the regression quantiles used to find the outliers.

That is why separate lists of options can be given to manipulate those two plots. The option "DateListPlot" can be used to make plots with date or time axes.

QRMonUnit[tsData]⟹
 QRMonQuantileRegression[12, {0.01, 0.99}]⟹
 QRMonOutliersPlot[
  "Echo" -> False,
  "DateListPlot" -> True,
  ListPlot -> {PlotStyle -> {Green, {PointSize[0.02], 
       Red}, {PointSize[0.02], Blue}}, Joined -> False, 
    PlotTheme -> "Grid"},
  Plot -> {PlotStyle -> Orange}]⟹
 QRMonTakeValue
 
Plotting-outliers-output-2

Plotting-outliers-output-2

Estimating conditional distributions

Consider the following problem:

How to estimate the conditional density of the dependent variable given a value of the conditioning independent variable?

(In other words, find the distribution of the y-values for a given, fixed x-value.)

The solution of this problem using Quantile Regression is discussed in detail in [PG1] and [AA4].

Finding a solution for this problem can be seen as a primary motivation to develop Quantile Regression algorithms.

The following pipeline (i) computes and plots a set of five regression quantiles and (ii) then using the found regression quantiles computes and plots the conditional distributions for two focus points (−2 and 1.)

QRMonUnit[distData]⟹
  QRMonQuantileRegression[6, 
   Range[0.1, 0.9, 0.2]]⟹
  QRMonPlot[GridLines -> {{-2, 1}, None}]⟹
  QRMonConditionalCDF[{-2, 1}]⟹
  QRMonConditionalCDFPlot;
Estimating-conditional-distributions-output-1

Estimating-conditional-distributions-output-1

Moving average, moving median, and moving map

Fairly often it is a good idea to apply filter functions like Moving Average or Moving Median to a given time series. We might want to:

  • visualize the obtained transformed data,

  • do regression over the transformed data,

  • compare with regression curves over the original data.

For these reasons QRMon has the functions QRMonMovingAverage, QRMonMovingMedian, and QRMonMovingMap that correspond to the built-in functions MovingAverage, MovingMedian, and MovingMap.

Here is an example:

QRMonUnit[tsData]⟹
  QRMonDateListPlot[ImageSize -> Small]⟹
  QRMonMovingAverage[20]⟹
  QRMonEchoFunctionValue["Moving avg: ", DateListPlot[#, ImageSize -> Small] &]⟹
  QRMonMovingMap[Mean, Quantity[20, "Days"]]⟹
  QRMonEchoFunctionValue["Moving map: ", DateListPlot[#, ImageSize -> Small] &];
Moving-average-moving-median-and-moving-map-output-1

Moving-average-moving-median-and-moving-map-output-1

Dependent variable simulation

Consider the problem of making a time series that is a simulation of a process given with a known time series.

More formally,

  • we are given a time-axis grid (regular or irregular),

  • we consider each grid node to correspond to a random variable,

  • we want to generate time series based on the empirical CDF’s of the random variables that correspond to the grid nodes.

The formulation of the problem hints at an (almost) straightforward implementation using Quantile Regression.

p = QRMonUnit[tsData]⟹QRMonQuantileRegression[30, Join[{0.01}, Range[0.1, 0.9, 0.1], {0.99}]];

tsNew =
  p⟹
   QRMonSimulate[1000]⟹
   QRMonTakeValue;

opts = {ImageSize -> Medium, PlotTheme -> "Detailed"};
GraphicsGrid[{{DateListPlot[tsData, PlotLabel -> "Actual", opts],
    DateListPlot[tsNew, PlotLabel -> "Simulated", opts]}}]
Dependent-variable-simulation-output-1

Dependent-variable-simulation-output-1

Finding local extrema in noisy data

Using regression fitting — and Quantile Regression in particular — we can easily construct semi-symbolic algorithms for finding local extrema in noisy time series data; see [AA5]. The QRMon function with such an algorithm is QRMonLocalExtrema.

In brief, the algorithm steps are as follows. (For more details see [AA5].)

  1. Fit a polynomial through the data.

  2. Find the local extrema of the fitted polynomial. (We will call them fit estimated extrema.)

  3. Around each of the fit estimated extrema find the most extreme point in the data by a nearest neighbors search (by using Nearest).

The function QRMonLocalExtrema uses the regression quantiles previously found in the monad pipeline (and stored in the context.) The bottom regression quantile is used for finding local minima, the top regression quantile is used for finding the local maxima.

An example of finding local extrema follows.

QRMonUnit[TimeSeriesWindow[tsData, {{2015, 1, 1}, {2018, 12, 31}}]]⟹
  QRMonQuantileRegression[10, {0.05, 0.95}]⟹
  QRMonDateListPlot[Joined -> False, PlotTheme -> "Scientific"]⟹
  QRMonLocalExtrema["NumberOfProximityPoints" -> 100]⟹
  QRMonEchoValue⟹
  QRMonAddToContext⟹
  QRMonEchoFunctionContext[
   DateListPlot[{#localMinima, #localMaxima, #data}, 
     PlotStyle -> {PointSize[0.015], PointSize[0.015], Gray}, 
     Joined -> False, 
     PlotLegends -> {"localMinima", "localMaxima", "data"}, 
     PlotTheme -> "Scientific"] &];
Finding-local-extrema-in-noisy-data-output-1

Finding-local-extrema-in-noisy-data-output-1

Note that in the pipeline above in order to plot the data and local extrema together some additional steps are needed. The result of QRMonLocalExtrema becomes the pipeline value; that pipeline value is displayed with QRMonEchoValue, and stored in the context with QRMonAddToContext. If the pipeline value is an association — which is the case here — the monad function QRMonAddToContext joins that association with the context association. In this case this means that we will have key-value elements in the context for "localMinima" and "localMaxima". The date list plot at the end of the pipeline uses values of those context keys (together with the value for "data".)

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding "setter", "taker", and "dropper" functions, as summarized in a previous section.

For example:

p = QRMonUnit[distData]⟹QRMonQuantileRegressionFit[2];

p⟹QRMonTakeRegressionFunctions

(* <|0.25 -> (0.0191185 + 0.00669159 #1 + 3.05509*10^-14 #1^2 &), 
     0.5 -> (0.191408 + 9.4728*10^-14 #1 + 3.02272*10^-14 #1^2 &), 
     0.75 -> (0.563422 + 3.8079*10^-11 #1 + 7.63637*10^-14 #1^2 &)|> *)
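
The taken regression functions can be used directly; for example, here the median regression quantile is evaluated at the (arbitrarily chosen) point 10:

(p⟹QRMonTakeRegressionFunctions)[0.5][10.]

(* 0.191408 *)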
     

If other values are put in the context they can be obtained through the (generic) function QRMonTakeContext, [AAp1]:

p = QRMonUnit[RandomReal[1, {2, 2}]]⟹QRMonAddToContext["data"];

(p⟹QRMonTakeContext)["data"]

(* {{0.608789, 0.741599}, {0.877074, 0.861554}} *)

Another generic function from [AAp1] is QRMonTakeValue (used many times above.)

Here is an example of the "data dropper" QRMonDropData:

p⟹QRMonDropData⟹QRMonTakeContext

(* <||> *)

(The "droppers" simply use the state monad function QRMonDropFromContext, [AAp1]. For example, QRMonDropData is equivalent to QRMonDropFromContext["data"].)

Unit tests

The development of QRMon was done with two types of unit tests: (i) directly specified tests, [AAp7], and (ii) tests based on randomly generated pipelines, [AAp8].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicQuantileRegression-Unit-Tests.wlt", [AAp7]:

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt"]
]
Unit-tests-output-1

The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "GenerateData", "QuantileRegression-1", \
"QuantileRegression-2", "QuantileRegression-3", \
"QuantileRegression-and-Fit-1", "Fit-and-QuantileRegression-1", \
"QuantileRegressionFit-and-Fit-1", "Fit-and-QuantileRegressionFit-1", \
"Outliers-1", "Outliers-2", "GridSequence-1", "BandsSequence-1", \
"ConditionalCDF-1", "Evaluate-1", "Evaluate-2", "Evaluate-3", \
"Simulate-1", "Simulate-2", "Simulate-3"} *)

Random pipelines tests

Since the monad QRMon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the QRMon DSL the sentences are QRMon pipelines. The package "MonadicQuantileRegressionRandomPipelinesUnitTests.m", [AAp8], has functions for generation of QRMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeQRMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

(* 
Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]}, 
 Clear[DoubleLongRightArrow];
 pipelines = pipelines /. {_TemporalData -> "tsData", _?MatrixQ -> "distData"};
 GridTableForm[Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], TableHeadings -> {"pipeline"}]
 ]
AutoCollapse[] *)
Unit-tests-random-pipelines-sample

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunQRMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages and all of the rest passed.

rpTRObj = TestReport[res]

(The failures with messages have to be examined, of course; some bugs were found in that way. The messages currently produced are expected ones.)
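
One way to examine the failing tests is to select them from the test report object by outcome (using standard TestReport and TestResultObject properties):

Select[rpTRObj["TestResults"], #["Outcome"] =!= "Success" &]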

Future plans

Workflow operations

A list of possible additional workflow operations and improvements follows.

  • Certain improvements can be done over the specification of the different plot options.

  • It will be useful to develop a function for automatic finding of over-fitting parameters.

  • The time series simulation should be done by aggregation of similar time intervals.

    • For example, for a time series that spans several years, a separate Quantile Regression simulation is made for each calendar month, and the results are spliced together to obtain a one-year simulation.
  • If the time series is represented as a sequence of categorical values, then the time series simulation can use Bayesian probabilities derived from sub-sequences.
    • QRMon already has functions that facilitate that, QRMonGridSequence and QRMonBandsSequence.

Conversational agent

Using the packages [AAp10, AAp11] we can generate QRMon pipelines with natural language commands. The plan is to develop and document those functionalities further.

Here is an example of a pipeline constructed with natural language commands:

QRMonUnit[distData]⟹
  ToQRMonPipelineFunction["show data summary"]⟹
  ToQRMonPipelineFunction["calculate quantile regression for quantiles 0.2, 0.8 and with 40 knots"]⟹
  ToQRMonPipelineFunction["plot"];
Future-plans-conversational-agent-output-1

Implementation notes

The implementation methodology of the QRMon monad packages [AAp3, AAp8] followed the methodology created for the ClCon monad package [AAp9, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document "A monad for classification workflows", [AA6].

A lot of the functionalities and signatures of QRMon were designed and programmed through considerations of natural language command specifications given to a specialized conversational agent. (As discussed in the previous section.)

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicQuantileRegression.m.

[AAp4] Anton Antonov, Quantile regression Mathematica package, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/QuantileRegression.m .

[AAp5] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp7] Anton Antonov, Monadic Quantile Regression unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegression-Unit-Tests.wlt .

[AAp8] Anton Antonov, Monadic Quantile Regression random pipelines Mathematica unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicQuantileRegressionRandomPipelinesUnitTests.m .

[AAp9] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

ConversationalAgents Packages

[AAp10] Anton Antonov, Time series workflows grammar in EBNF, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

[AAp11] Anton Antonov, QRMon translator Mathematica package, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

MathematicaForPrediction articles

[AA1] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

[AA2] Anton Antonov, "Quantile regression through linear programming", (2013), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2013/12/16/quantile-regression-through-linear-programming/ .

[AA3] Anton Antonov, "Quantile regression with B-splines", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/01/quantile-regression-with-b-splines/ .

[AA4] Anton Antonov, "Estimation of conditional density distributions", (2014), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2014/01/13/estimation-of-conditional-density-distributions/ .

[AA5] Anton Antonov, "Finding local extrema in noisy data using Quantile Regression", (2015), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2015/09/27/finding-local-extrema-in-noisy-data-using-quantile-regression/ .

[AA6] Anton Antonov, "A monad for classification workflows", (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows/ .

Other

[Wk1] Wikipedia entry, Monad, URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry, Quantile Regression, URL: https://en.wikipedia.org/wiki/Quantile_regression .

[Wk3] Wikipedia entry, Chebyshev polynomials, URL: https://en.wikipedia.org/wiki/Chebyshev_polynomials .

[CN1] Brian S. Cade and Barry R. Noon, "A gentle introduction to quantile regression for ecologists", (2003), Frontiers in Ecology and the Environment, 1 (8): 412-420. doi:10.2307/3868138. URL: http://www.econ.uiuc.edu/~roger/research/rq/QReco.pdf .

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub. URL: https://github.com/halirutan/Mathematica-IntelliJ-Plugin .

[RK1] Roger Koenker, Quantile Regression, Cambridge University Press, 2005.

Mathematica-vs-R: Deep learning examples

Introduction

This MathematicaVsR at GitHub project is for the comparison of the Deep Learning functionalities in R/RStudio and Mathematica/Wolfram Language (WL).

The project aims to mirror and support the talk "Deep Learning series (session 2)" of the meetup Orlando Machine Learning and Data Science.

The focus of the talk is R and Keras, so the project structure is strongly influenced by the content of the book Deep learning with R, [1], and the corresponding Rmd notebooks, [2].

Some of the Mathematica notebooks repeat the material in [2]; others are original.

WL’s Neural Nets framework and abilities are fairly well described in the reference page "Neural Networks in the Wolfram Language overview", [4], and the webinar talks [5].

The corresponding documentation pages [3] (R) and [6] (WL) can be used for a very fruitful comparison of features and abilities.

Remark: With "deep learning with R" here we mean "Keras with R".

Remark: An alternative to R/Keras and Mathematica/MXNet is the library H2O (which has interfaces to Java, Python, R, and Scala). See the project’s directory R.H2O for examples.

The presentation

The big picture

Deep learning can be used for both supervised and unsupervised learning. In this project we concentrate on supervised learning.

The following diagram outlines the general, simple classification workflow we have in mind.

simple_classification_workflow

Here is a corresponding classification monadic pipeline in Mathematica:

monadic_pipeline

Code samples

R-Keras uses monadic pipelines through the pipe operator (%>%) of the library magrittr. For example:

model <- keras_model_sequential() 
model %>% 
  layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>% 
  layer_dropout(rate = 0.4) %>% 
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')

The corresponding Mathematica command is:

model =
 NetChain[{
   LinearLayer[256, "Input" -> 784],
   ElementwiseLayer[Ramp],
   DropoutLayer[0.4],
   LinearLayer[128],
   ElementwiseLayer[Ramp],
   DropoutLayer[0.3],
   LinearLayer[10],
   SoftmaxLayer[] (* corresponds to activation = 'softmax' on the last R layer *)
 }]
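
For completeness, here is a hypothetical training call for the chain above; the variable trainData, the class decoder, and the number of training rounds are illustrative assumptions, not part of the talk materials. It is assumed that trainData is a list of rules of the form inputVector -> classLabel with labels 0 through 9.

trainedModel =
  NetTrain[
   NetReplacePart[model, "Output" -> NetDecoder[{"Class", Range[0, 9]}]],
   trainData,
   MaxTrainingRounds -> 5]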

Comparison

Installation

  • Mathematica

    • The neural networks framework comes with Mathematica. (No additional installation required.)

  • R

    • Pretty straightforward using the directions in [3]. (A short list.)

    • Some additional Python installation is required.

Simple neural network classifier over MNIST data

Vector classification

TBD…

Categorical classification

TBD…

Regression

Encoders and decoders

The Mathematica encoders (for neural networks and for machine learning tasks in general) are well designed and quite advanced.

The encoders in R-Keras are fairly useful but not as advanced as those in Mathematica.
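
For illustration (and not as the planned correspondence), here is a typical WL encoder specification that turns images into 28x28 grayscale numeric arrays suitable for a net input port:

enc = NetEncoder[{"Image", {28, 28}, ColorSpace -> "Grayscale"}];

(* the encoder can be applied directly to data, or attached to a net's "Input" port *)
Dimensions[enc[RandomImage[1, {28, 28}]]]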

[TBD: Encoder correspondence…]

Dealing with over-fitting

Repositories of pre-trained models

Documentation

References

[1] F. Chollet, J. J. Allaire, Deep learning with R, (2018).

[2] J. J. Allaire, Deep Learning with R notebooks, (2018), GitHub.

[3] RStudio, Keras reference.

[4] Wolfram Research, "Neural Networks in the Wolfram Language overview".

[5] Wolfram Research, "Machine Learning Webinar Series".

[6] Wolfram Research, "Neural Networks guide".