WirVsVirus 2020 hackathon participation

Introduction

Last weekend (2020-03-20 through 2020-03-22) I participated in the (Germany-centric) hackathon WirVsVirus. (A friend of mine who lives in Germany asked me to team up and sign up.)

Our idea proposal was accepted and listed in the dedicated overview table (see item 806). The title of our hackathon project is:

“Geo-spatial-temporal Economic Model for COVID-19 Propagation and Management in Germany”

Nearly a dozen people enlisted to help. (We communicated through Slack.)

13dzfagts8105

Multiple people helped with discussing ideas, pointing out where to find data, gathering data, and documenting related analysis. Of course, just discussing the proposed solutions was already a great help!

What was accomplished

Work plans

The following mind-map reflects pretty well what was planned and done:

15n5cjaej10q8

There is also a related org-mode file with the work plan.

Data

I obtained Germany city data with Mathematica’s built-in functions and used it to heuristically derive a traveling patterns graph, [AA1].

Here is the data:

Here is a geo-histogram of that data:

0t08vw1kjdzbc

We considered a fair amount of other data, but because of the time limitations of the hackathon we used only the dataset above.

Single-site models

During the development phase I used the model SEI2R, but since we wanted to have a “geo-spatial-temporal epidemiological economics model” I productized the implementation of SEI2HR-Econ, [AAp1].

Here are the stocks, rates, and equations of SEI2HR-Econ:

0tbp6de6zdez0

Multi-site SEI2R (SEI2HR-Econ) over a hexagonal grid graph

I managed to follow through with a large part of the work plan for the hackathon and make a multi-site scaled model that “follows the money”, [AA1]. Here is a diagram that shows the traveling patterns graph and solutions at one of the nodes:

1vnygv6t7chgg

Here is (a snapshot of) an interactive interface for studying and investigating the solution results:

1pgmngb4uyuzb

For more details see the notebook [AA1]. Different parameters can be set in the “Parameters” section. Especially of interest are the quarantine related parameters: start, duration, effect on contact rates and traffic patterns.

I also put code in that notebook for exporting simulation results, and I programmed visualization routines in R, [AA2], so that other team members could explore the results.

References

[DP1] 47_wirtschaftliche Auswirkung_Geo-spatial-temp-econ-modell, DevPost.

[WRI1] Wolfram Research, Inc., Germany city data records, (2020), SystemModeling at GitHub.

[AA1] Anton Antonov, “WirVsVirus hackathon multi-site SEI2R over a hexagonal grid graph”, (2020), SystemModeling at GitHub.

[AA2] Anton Antonov, “WirVsVirus-Hackathon in R”, (2020), SystemModeling at GitHub.

[AAp1] Anton Antonov, “Epidemiology models Mathematica package”, (2020), SystemModeling at GitHub.

Scaling of epidemiology models with multi-site compartments

Version 1.0

Introduction

In this notebook we describe and exemplify an algorithm that allows the specification and execution of geo-spatial-temporal simulations of infectious disease spread. (Concrete implementations and examples are given.)

The assumptions of the typical compartmental epidemiological models do not apply for countries or cities that are non-uniformly populated. (For example, China, USA, Russia.) There is a need to derive epidemiological models that take into account the non-uniform distribution of populations and related traveling patterns within the area of interest.

Here is a visual aid (made with a random graph over the 30 largest cities of China):

1acjs15vamk0b

In this notebook we show how to extend core, single-site epidemiological models into larger models for making spatial-temporal simulations. In the explanations and examples we use SEI2R, [AA2, AAp1], as a core epidemiological model, but other models can be adopted if they adhere to the model data structure of the package “EpidemiologyModels.m”, [AAp1].

From our experiments we believe that the proposed multi-site extension algorithm gives a modeling ingredient that is hard to emulate by other means within single-site models.

Definitions

Single-site: A geographical location (city, neighbourhood, campus) for which the assumptions of the classical compartmental epidemiological models hold.

Single-site epidemiological model: A compartmental epidemiological model for a single site. Such a model has a system of Ordinary Differential Equations (ODE’s) and site-dependent initial conditions.

Multi-site area: An area comprised of multiple single sites with known traveling patterns between them. The area has a directed graph G with nodes that correspond to the sites and a positive matrix tpm(G) for the traveling patterns between the sites.

Problem definition: Given (i) a single-site epidemiological model M, (ii) a graph G connecting multiple sites, and (iii) a traveling patterns matrix tpm(G) between the nodes of G, derive an epidemiological model S(M, tpm(G)) that more adequately simulates viral disease propagation over G.

Multi-Site Epidemiological Model Extension Algorithm (MSEMEA): An algorithm that derives from a given single site epidemiological model and multi-site area an epidemiological model that can be used to simulate the geo-spatial-temporal epidemics and infectious disease spreads. (The description of MSEMEA is the main message of this notebook.)

Load packages

The epidemiological models framework used in this notebook is implemented with the packages [AAp1, AAp2, AAp3]; the interactive plots functions are from the package [AAp4].

Notebook structure

The section “General algorithm description” gives rationale and conceptual steps of MSEMEA.

The next two sections of the notebook follow the procedure outline, using the SEI2R model as M, a simple graph with two nodes as G, and both constant and time-dependent matrices for tpm(G).

The section “Constant traveling patterns over a grid graph” presents an important test case with a grid graph that we use to test and build confidence in MSEMEA. The sub-section “Observations” is especially of interest.

The section “Time-dependent traveling patterns over a random graph” presents a nearly “real life” application of MSEMEA using a random graph and a time dependent travelling patterns matrix.

The section “Money from lost productivity” shows how to track the money losses across the sites.

The last section, “Future plans”, outlines envisioned (immediate) extensions of the work presented in this notebook.

General algorithm description

In this section we describe a modeling approach that uses different mathematical modeling approaches for (i) the multi-site travelling patterns and (ii) the single-site disease spread interactions, and then (iii) unifies them into a common model.

Splitting and scaling

The traveling between large, densely populated cities is a very different process from the usual mingling of people within those cities. That usual mingling is what is assumed and used in the typical compartmental epidemiological models. It seems a good idea to split the two processes and derive a common model.

Assume that all journeys finish within a day. We can model the people arriving (flying in) into a city as births, and people departing a city as deaths.

Let us take a simple model like SIR or SEIR and write its equations for every site we consider. This means that for every site we have the same ODE’s with site-dependent initial conditions.

Consider the traveling patterns matrix K, which is a contingency matrix derived from source-destination traveling records. (Or the adjacency matrix of a traveling patterns graph.) The matrix entry K(i,j) tells us how many people traveled from site i to site j. We systematically change the ODE’s of the sites in the following way.
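
Here is a minimal sketch of deriving such a contingency matrix K from source-destination travel records with built-in functions only; the records below are made up for illustration:

records = {{1, 2}, {1, 2}, {2, 1}, {1, 3}, {3, 2}}; (* each record: {from-site, to-site}; illustrative data *)
K = SparseArray[Normal[Counts[records]], {3, 3}];
MatrixForm[K] (* K[[i, j]] = number of people who traveled from site i to site j *)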

Assume that site a has travelers coming only from site b and going only to site b. Assume that the Total Population (TP) sizes for sites a and b are N_a and N_b respectively. Assume that only people from the Susceptible Population (SP) travel. If the adopted single-site model is SIR, [Wk1], we take the SP equation of site a

\text{SP}_a'(t)=-\frac{\beta  \text{IP}_a(t) \text{SP}_a(t)}{N_a}-\text{SP}_a(t) \mu

and change it into the equation

\text{SP}_a'(t)=-\frac{\beta  \text{IP}_a(t) \text{SP}_a(t)}{N_a}-\text{SP}_a(t) \mu -\frac{K(a,b)\text{SP}_a(t)}{N_a}+\frac{K(b,a)\text{SP}_b(t)}{N_b},

assuming that

\frac{K(a,b)\text{SP}_a(t)}{N_a}\leq N_a ,\frac{K(b,a)\text{SP}_b(t)}{N_b}\leq N_b.

Remark: In the package [AAp3] the transformations above are done with the more general and robust formula:

\min \left(\frac{K(i,j)\text{SP}_i(t)}{\text{TP}_i(t)},\text{TP}_i(t)\right).

The transformed systems of ODE’s of the sites are joined into one “big” system of ODE’s, appropriate initial conditions are set, and the “big” ODE system is solved. (The sections below show concrete examples.)
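
Here is a minimal, self-contained sketch of that construction for two sites with the SIR model, using only built-in functions. All rate values and traveler counts below are made up for illustration; the package implementation, ToSiteCompartmentsModel of [AAp2], is more general.

(* Two-site SIR in which only susceptible people travel; k12 and k21 are travelers per day. *)
beta = 0.6; mu = 0.00005; gamma = 1/14.;
n1 = 100000; n2 = 100000; k12 = 1000; k21 = 100;
eqs = {
   SP1'[t] == -beta IP1[t] SP1[t]/n1 - mu SP1[t] - k12 SP1[t]/n1 + k21 SP2[t]/n2,
   IP1'[t] == beta IP1[t] SP1[t]/n1 - gamma IP1[t] - mu IP1[t],
   RP1'[t] == gamma IP1[t] - mu RP1[t],
   SP2'[t] == -beta IP2[t] SP2[t]/n2 - mu SP2[t] - k21 SP2[t]/n2 + k12 SP1[t]/n1,
   IP2'[t] == beta IP2[t] SP2[t]/n2 - gamma IP2[t] - mu IP2[t],
   RP2'[t] == gamma IP2[t] - mu RP2[t],
   SP1[0] == n1 - 1, IP1[0] == 1, RP1[0] == 0,
   SP2[0] == n2, IP2[0] == 0, RP2[0] == 0};
sol = First@NDSolve[eqs, {SP1, IP1, RP1, SP2, IP2, RP2}, {t, 0, 365}];
Plot[Evaluate[{IP1[t], IP2[t]} /. sol], {t, 0, 365}, PlotLegends -> {"IP site 1", "IP site 2"}]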

Steps of MSEMEA

MSEMEA derives a compartmental model that combines (i) a graph representation of multi-site traveling patterns with (ii) a single-site compartmental epidemiological model.

Here is a visual aid for the algorithm steps below:

0i0g3m8u08bj4
05c2sz8hd3ryj
  1. Get a single-site epidemiological compartmental model data structure, M.
    1. The model data structure has stocks and rates dictionaries, equations, initial conditions, and prescribed rate values; see [AA2, AAp1].
  2. Derive the site-to-site traveling patterns matrix K for the sites in the graph G.
  3. For each node i of G make a copy of the model M and denote it with M[i].
    1. In general, the models M[i], i\in G have different initial conditions.
    2. The models can also have different death rates, contact rates, etc.
  4. Combine the models M[i], i\in G into the scaled model S.
    1. Change the equations of M[i], i\in G to reflect the traveling patterns matrix K.
    2. Join the systems of ODE’s of M[i], i\in G into one system of ODE’s.
  5. Set appropriate or desired initial conditions for each of the populations in S.
  6. Solve the ODE’s of S.
  7. Visualize the results.

Precaution

Care should be taken when specifying the initial conditions of MSEMEA’s system of ODE’s (sites’ populations) and the traveling patterns matrix. For example, the simulations can “blow up” if the traveling patterns matrix values are too large. As indicated above, the package [AAp3] puts in some safeguards, but in our experiments with random graphs and random traveling patterns matrices we occasionally still get “wild” results.

Analogy with Large scale air-pollution modeling

There is a strong analogy between MSEMEA and Eulerian models of Large Scale Air-Pollution Modeling (LSAPM), [AA3, ZZ1].

The mathematical models of LSAPM have a “chemistry part” and an “advection-diffusion part.” It is hard to treat such a mathematical model directly, so different kinds of splittings are used. If we consider 2D LSAPM, then we can say that we cover the modeling area with stirred tank reactors; with the chemistry component we simulate the species’ chemical reactions in those stirred tanks, and with the advection-diffusion component we change the species concentrations in the stirred tanks (according to some wind patterns).

Similarly, with MSEMEA we separated the travel of population compartments from the “standard” epidemiological modeling interaction between the population compartments.

Similarly to the so called “rotational test” used in LSAPM to evaluate numerical schemes, we derive and study the results of “grid graph tests” for MSEMEA.

Single site epidemiological model

Here is the SEI2R model from the package [AAp1]:

18y99y846b10m

Here we endow the SEI2R model with a (prominent) ID:

0alzg909zg4h0

Thus we demonstrated that we can do Step 3 of MSEMEA.

Below we use ID’s that correspond to the nodes of graphs (and are integers.)

Scaling the single-site SIR model over a small complete graph

Constant travel matrices

Assume we have two sites and the following graph and matrix describe the traveling patterns between them.

Here is the graph:

0vgm31o9drq4f

And here is the traveling patterns matrix:

0lbp0xgso2tgt

Note that there are many more travelers from 1 to 2 than from 2 to 1.

Here we obtain the core, single-site model (as shown in the section above):

Make the multi-site compartments model with SEI2R and the two-node travel matrix using the function ToSiteCompartmentsModel of [AAp2]:

Show the unique stocks in the multi-site model:

From the symbolic form of the multi-model equations derive the specific equations with the adopted rate values:

0mjliik7acoyd

Show the initial conditions:

Show the total number of equations:

Solve the system of ODE’s of the extended model:

Display the solutions for each site separately:

1o9362wmczxo6

From the plots above we see that both sites start with total populations of 100000 people. Because more travelers go from 1 to 2 we see that the exposed, infected, and recovered populations are larger at 2.

Time dependent travel matrices

Instead of using constant traveling patterns matrices we can use matrices with time functions as entries. It is instructive to repeat the computations above using this matrix:

1gsoh03lixm6y

Here are the corresponding number of traveling people functions:

0qh7kbtwxyatf
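
For example, time-dependent entries can be defined as functions of t; the functions below are illustrative only, not the ones used in the matrix above:

k12[t_] := 1000 (1 + Cos[2 Pi t/7])/2;  (* travelers per day from site 1 to site 2 *)
k21[t_] := 100 (1 + Sin[2 Pi t/7])/2;   (* travelers per day from site 2 to site 1 *)
Plot[{k12[t], k21[t]}, {t, 0, 28}, PlotLegends -> {"1 -> 2", "2 -> 1"}]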

Here we scale the SIR model, solve the obtained system of ODE’s, and plot the solutions:

0trv1vnslv1rm

Note that the oscillatory nature of the temporal functions in the traveling patterns matrix is reflected in the simulation results.

Constant traveling patterns over a grid graph

In this section we do the model extension and simulation over a regular grid graph with a constant traveling patterns matrix.

Here we create a grid graph with directed edges:

0l2m5npcrlnvw

Note that:

  • There is one directed edge between any two neighboring nodes
  • All horizontal edges point in one direction
  • All vertical edges point in one direction
  • The edges are directed from nodes with smaller indexes to nodes with larger indexes.
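
Here is a sketch of constructing a directed grid graph with those properties using built-in functions (not necessarily the exact construction used for the graph shown above):

gr0 = GridGraph[{4, 4}];
grGrid = Graph[VertexList[gr0],
   DirectedEdge @@@ (Sort /@ (List @@@ EdgeList[gr0])), (* orient each edge from the smaller to the larger index *)
   VertexLabels -> "Name"]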

Here we make a constant traveling matrix and summarize it:

0d4ocoa6gibfj

Here we scale the SEI2R model with the grid graph constant traveling matrix:

Change the initial conditions in the following way:

  • Pick initial population size per site (same for all sites)
  • Make a constant populations vector
  • At all sites except the first one put the infected populations to zero; the first site has one severely symptomatic person
  • Set the susceptible populations to be consistent with the total and infected populations.

Solve the system of ODE’s of the scaled model:

Randomly sample the graph sites and display the solutions separately for each site in the sample:

030qbpok8qfmc

Display solutions of the first and last site:

0mcxwp8l1vqlb

As expected from the graph structure, we can see in the first site plot that its total population is decreasing – nobody is traveling to the first site. Similarly, we can see in the last site plot that its total population is increasing – nobody leaves the last site.

Graph evolution visualizations

We can visualize the spatial-temporal evolution of the model’s populations using sequences of graphs. The graphs in the sequences are copies of the multi-site graph, each copy having its nodes colored according to the populations in the solution steps.
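
Here is a sketch of that coloring for a single solution step, with made-up population values (illustration only):

gr = GridGraph[{4, 4}];
vals = Rescale[RandomReal[{0, 10^5}, VertexCount[gr]]]; (* stand-ins for a population at one time step *)
Graph[VertexList[gr], EdgeList[gr], VertexSize -> 0.6,
  VertexStyle -> Thread[VertexList[gr] -> (ColorData["TemperatureMap"] /@ vals)]]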

Here is a sub-sequence for the total populations:

070ld135tkf7y

Here is a sub-sequence for the sum of the infected populations:

0rv8vap59g8vk

Here is a sub-sequence for the recovered population:

0dnvpy20nafpz

Here is an animation of the sum of the infected populations:

1hfd1mqh0iwk7

Curve shapes of the globally-aggregated solutions

Let us plot for each graph vertex the sum of the solutions of the two types of infected populations. Here is a sample of those graphs:

0jwgipeg8uznn

We can see from the plot above that at the grid vertexes we have typical SEIR curve shapes for the corresponding infected populations.

Let us evaluate the solutions for the infected populations over all graph vertexes and sum them. Here is the corresponding “globally-aggregated” plot:

03nzzmh7yydeu

We can see that the globally aggregated plot has a very different shape than the individual vertex plots. The globally aggregated plot has a more symmetric look; the individual vertex plots have much steeper gradients on their left sides.

We can conjecture that a multi-site model made by MSEMEA would capture real-life situations better than any single-site model. For example, by applying MSEMEA we might be more successful in our calibration attempts for the Hubei data shown (and calibrated upon) in [AA2].

Interactive interface

With this interactive interface we see the evolution of all populations across the graph:

0fv3dbwmah3sh

Observations

Obviously the simulations over the described grid graph, related constant traveling patterns matrix, and constant populations have the purpose of building confidence in the conceptual design of MSEMEA and its implementation.

The following observations agree with our expectations for MSEMEA’s results over the “grid graph test”.

  1. The populations plots at each site resemble the typical plots of SEI2R.
  2. The total population at the first site linearly decreases.
  3. The total population at the last site linearly increases.
  4. The plots of the total populations clearly have gradually increasing gradients from the low index value nodes to the high index value nodes.
  5. For the infected populations there is a clear wave that propagates diagonally from the low index value nodes to the high index value nodes.
    1. In the direction of the general “graph flow”.
  6. The front of the infected populations wave is much steeper (gives “higher contrast”) than the tail.
    1. This should be expected from the single-site SEI2R plots.
  7. For the recovered populations there is a clear “saturation wave” pattern that indicates that the recovered populations change from 0 to values close to the corresponding final total populations.
  8. The globally aggregated solutions might have fairly different shapes than the single-site solutions. Hence, we think that MSEMEA gives a modeling ingredient that is hard to replicate or emulate by other means in single-site models.

Time-dependent traveling patterns over a random graph

In this section we apply the model extension and simulation over a random graph with a random time-dependent traveling patterns matrix.

1elzu67nqndly

Remark: The computations in this section work with larger random graphs; we use a small graph for more legible presentation of the workflow and results. Also, the computations should clearly demonstrate the ability to do simulations with real life data.

Derive a traveling patterns matrix with entries that are random functions:

Here is a fragment of the matrix:

17ym9q0uehfbt

Summarize and plot the matrix at t=1:

0bw9rrd64615r

Here we scale the SEI2R model with the random traveling matrix:

Change the initial conditions in the following way:

  • Pick maximum population size per site
  • Derive random populations for the sites
  • At all sites except the first one put the infected populations to zero; the first site has one severely symptomatic person
  • Set the susceptible populations to be consistent with the total and infected populations.

Here solve the obtained system of ODE’s:

Here we plot the solutions:

1pzmli04rbchr

Graph evolution visualizations

As in the previous section, we can visualize the spatial-temporal evolution of the model’s populations using sequences of graphs.

Here is a globally normalized sequence:

1v8xq8jzm6ll5

Here is a locally normalized (“by vertex”) sequence:

029w2jtxsyaen

Money from lost productivity

The model SEI2R from [AAp1] has the stock “Money from Lost Productivity” shown as \text{MLP}(t) in the equations:

1sareh7ovkmtt

Here are MLP plots from the two-node graph model:

0cm5n3xxlewns

Here we plot the sum of the accumulated money losses:

0n6gz7j3qlq07

Here is the corresponding “daily loss” (derivative):

0jrk2ktzeeled

Future plans

There are multiple ways to extend the presented algorithm, MSEMEA. Here are a few of the most immediate ones:

  1. Investigate and describe the conditions under which MSEMEA performs well, and under which it “blows up”
  2. Apply MSEMEA together with single site models that have large economics parts
  3. Do real data simulations related to the spread of COVID-19.

References

Articles, books

[Wk1] Wikipedia entry, “Compartmental models in epidemiology”.

[HH1] Herbert W. Hethcote (2000). “The Mathematics of Infectious Diseases”. SIAM Review. 42 (4): 599–653. Bibcode:2000SIAMR..42..599H. doi:10.1137/s0036144500371907.

[AA1] Anton Antonov, “Coronavirus propagation modeling considerations”, (2020), SystemModeling at GitHub.

[AA2] Anton Antonov, “Basic experiments workflow for simple epidemiological models”, (2020), SystemModeling at GitHub.

[AA3] Anton Antonov, “Air pollution modeling with gridMathematica”, (2006), Wolfram Technology Conference.

[ZZ1] Zahari Zlatev, Computer Treatment of Large Air Pollution Models. 1995. Kluwer.

Repositories, packages

[WRI1] Wolfram Research, Inc., “Epidemic Data for Novel Coronavirus COVID-19”, WolframCloud.

[AAr1] Anton Antonov, Coronavirus propagation dynamics project, (2020), SystemModeling at GitHub.

[AAp1] Anton Antonov, “Epidemiology models Mathematica package”, (2020), SystemModeling at GitHub.

[AAp2] Anton Antonov, “Epidemiology models modifications Mathematica package”, (2020), SystemModeling at GitHub.

[AAp3] Anton Antonov, “Epidemiology modeling visualization functions Mathematica package”, (2020), SystemModeling at GitHub.

[AAp4] Anton Antonov, “System dynamics interactive interfaces functions Mathematica package”, (2020), SystemModeling at GitHub.

Conference abstracts similarities

Introduction

In this MathematicaVsR project we discuss and exemplify finding and analyzing similarities between texts using Latent Semantic Analysis (LSA). Both Mathematica and R codes are provided.

The LSA workflows are constructed and executed with the software monads LSAMon-WL, [AA1, AAp1], and LSAMon-R, [AAp2].

The illustrating examples are based on conference abstracts from rstudio::conf and Wolfram Technology Conference (WTC), [AAd1, AAd2]. Since the number of rstudio::conf abstracts is small and since rstudio::conf 2020 is about to start at the time of preparing this project we focus on words and texts from RStudio’s ecosystem of packages and presentations.

Statistical thesaurus for words from RStudio’s ecosystem

Consider the focus words:

{"cloud","rstudio","package","tidyverse","dplyr","analyze","python","ggplot2","markdown","sql"}

Here is a statistical thesaurus for those words:

0az70qt8noeqf

Remark: Note that the computed thesaurus entries seem fairly “R-flavored.”

Similarity analysis diagrams

As expected, the abstracts from rstudio::conf tend to cluster closely – note the square formed in the top-left of the plot of a similarity matrix based on extracted topics:

1d5a83m8cghew

Here is a similarity graph based on the matrix above:

09y26s6kr3bv9

Here is a clustering (by “graph communities”) of the sub-graph highlighted in the plot above:

0rba3xgoknkwi

Notebooks

Comparison observations

LSA pipelines specifications

The packages LSAMon-WL, [AAp1], and LSAMon-R, [AAp2], make the comparison easy – the codes of the specified workflows are nearly identical.

Here is the Mathematica code:

lsaObj =
  LSAMonUnit[aDescriptions]⟹
   LSAMonMakeDocumentTermMatrix[{}, Automatic]⟹
   LSAMonEchoDocumentTermMatrixStatistics⟹
   LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 2, Method -> "ICA", MaxSteps -> 200]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6];

Here is the R code:

lsaObj <- 
  LSAMonUnit(lsDescriptions) %>% 
  LSAMonMakeDocumentTermMatrix( stemWordsQ = FALSE, stopWords = stopwords::stopwords() ) %>% 
  LSAMonApplyTermWeightFunctions( "IDF", "TermFrequency", "Cosine" ) %>% 
  LSAMonExtractTopics( numberOfTopics = 36, minNumberOfDocumentsPerTerm = 5, method = "NNMF", maxSteps = 20, profilingQ = FALSE ) %>% 
  LSAMonEchoTopicsTable( numberOfTableColumns = 6, wideFormQ = TRUE ) 

Graphs and graphics

Mathematica’s built-in graph functions make the exploration of the similarities much easier than using R.

Mathematica’s matrix plots provide more control and are more readily informative.

Sparse matrix objects with named rows and columns

R’s built-in sparse matrices with named rows and columns are great. LSAMon-WL utilizes a similar, specially implemented sparse matrix object, see [AA1, AAp3].

References

Articles

[AA1] Anton Antonov, A monad for Latent Semantic Analysis workflows, (2019), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, Text similarities through bags of words, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Data

[AAd1] Anton Antonov, RStudio::conf-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

[AAd2] Anton Antonov, Wolfram-Technology-Conference-2016-to-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Latent Semantic Analysis Monad R package, (2019), R-packages at GitHub.

[AAp3] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

Wolfram Live-Coding Series on Latent Semantic Analysis workflows

In brief

The lectures on Latent Semantic Analysis (LSA) are to be recorded through Wolfram University (Wolfram U) in December 2019 and January-February 2020.

The lectures (as live-coding sessions)

  1. Overview of Latent Semantic Analysis (LSA) typical problems and basic workflows.
    Answering preliminary anticipated questions.
    Here is the recording of the first session at Twitch.

    • What are the typical applications of LSA?
    • Why use LSA?
    • What is the fundamental philosophical or scientific assumption for LSA?
    • What is the most important and/or fundamental step of LSA?
    • What is the difference between LSA and Latent Semantic Indexing (LSI)?
    • What are the alternatives?
      • Using Neural Networks instead?
    • How is LSA used to derive similarities between two given texts?
    • How is LSA used to evaluate the proximity of phrases? (That have different words, but close semantic meaning.)
    • How do the main dimension reduction methods compare?
  2. LSA for document collections.
    Here is the recording of the second session at Twitch.

    • Motivational example – full blown LSA workflow.
    • Fundamentals, text transformation (the hard way):
      • bag of words model,
      • stop words,
      • stemming.
    • The easy way with LSAMon.
    • “Eat your own dog food” example.
  3. Representation of the documents – the fundamental matrix object.
    Here is the recording of the third session at Twitch.

    • Review: last session’s example.
    • Review: the motivational example – full blown LSA workflow.
    • Linear vector space representation:
      • LSA’s most fundamental operation,
      • matrix with named rows and columns.
    • Pareto Principle adherence
      • for a document,
      • for a document collection, and
      • (in general.)
  4. Representation of unseen documents.
    Here is the recording of the fourth session at Twitch.

  5. LSA for image de-noising and classification. Here is the recording of the fifth session at Twitch.

    • Review: last session’s image collection topics extraction.
      • Let us try that on two other datasets:
    • Image de-noising:
      • Using handwritten digits (again).
    • Image classification:
      • Handwritten digits.
  6. Further use cases.

Wolfram Live-Coding Series on Quantile Regression workflows

A month or so ago I was invited to make Quantile Regression presentations at the Wolfram Research Twitch channel.

The live-coding sessions

  1. In the first live-streaming / live-coding session I demonstrated how to make Quantile Regression workflows using the software monad QRMon and some of the underlying software design principles. (Namely “monadic programming”.)
  2. In the follow up live-coding session I discussed topics like outliers removal (data cleaning), anomaly detection, and structural breaks.
  3. In the third live-coding session:
    • First, we demonstrate and explain how to do QR-based time series simulations and their applications in Operations Research.
    • Next, we discuss QR in 2D and 3D and a related application.
  4. In the fourth live-coding session we discussed the following topics.
    • Brief review of previous sessions.
    • Proclaiming the upcoming ResourceFunction["QuantileRegression"].
    • Predict tomorrow from today’s data.
    • Using NLP techniques on time series.
    • Generation of QR workflows with natural language commands.

Notebooks

  • The notebook of the 2nd session is also attached. (I added a “References” section to it.)

Update (2019-11-13)

A few days ago the Wolfram Function Repository entry QuantileRegression was approved. The resource description page has many of the topics discussed in the live-coding sessions on Quantile regression.

A monad for Latent Semantic Analysis workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad, [Wk1], for Latent Semantic Analysis workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Latent Semantic Analysis (LSA)? : A statistical method (or a technique) for finding relationships in natural language texts that is based on the so called Distributional hypothesis, [Wk2, Wk3]. (The Distributional hypothesis can be simply stated as “linguistic items with similar distributions have similar meanings”; for an insightful philosophical and scientific discussion see [MS1].) LSA can be seen as the application of Dimensionality reduction techniques over matrices derived with the Vector space model.

The goal of the monad design is to make the specification of LSA workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario.

The monad is named LSAMon and it is based on the State monad package “StateMonadCodeGenerator.m”, [AAp1, AA1], the document-term matrix making package “DocumentTermMatrixConstruction.m”, [AAp4, AA2], the Non-Negative Matrix Factorization (NNMF) package “NonNegativeMatrixFactorization.m”, [AAp5, AA2], and the package “SSparseMatrix.m”, [AAp2, AA5], that provides matrix objects with named rows and columns.

The data for this document is obtained from WL’s repository and it is manipulated into a certain ready-to-utilize form (and uploaded to GitHub.)

The monadic programming design is used as a Software Design Pattern. The LSAMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of latent semantic analysis workflows.

Here is an example of using the LSAMon monad over a collection of documents that consists of 233 US state of union speeches.

LSAMon-Introduction-pipeline
LSAMon-Introduction-pipeline-echos

The table above is produced with the package “MonadicTracing.m”, [AAp2, AA1], and some of the explanations below also utilize that package.

As it was mentioned above the monad LSAMon can be seen as a DSL. Because of this the monad pipelines made with LSAMon are sometimes called “specifications”.

Remark: In this document with “term” we mean “a word, a word stem, or other type of token.”

Remark: LSA and Latent Semantic Indexing (LSI) are considered more or less to be synonyms. I think that “latent semantic analysis” sounds more universal and that “latent semantic indexing” as a name refers to a specific Information Retrieval technique. Below we refer to “LSI functions” like “IDF” and “TF-IDF” that are applied within the generic LSA workflow.

Contents description

The document has the following structure.

  • The sections “Package load” and “Data load” obtain the needed code and data.
  • The sections “Design consideration” and “Monad design” provide motivation and design decisions rationale.
  • The sections “LSAMon overview”, “Monad elements”, and “The utilization of SSparseMatrix objects” provide technical descriptions needed to utilize the LSAMon monad .
    • (Using a fair amount of examples.)
  • The section “Unit tests” describes the tests used in the development of the LSAMon monad.
    • (The random pipelines unit tests are especially interesting.)
  • The section “Future plans” outlines future directions of development.
  • The section “Implementation notes” just says that LSAMon’s development process and this document follow the ones of the classifications workflows monad ClCon, [AA6].

Remark: One can read only the sections “Introduction”, “Design consideration”, “Monad design”, and “LSAMon overview”. That set of sections provide a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp7]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Data load

In this section we load data that is used in the rest of the document. The text data was obtained through WL’s repository, transformed in a certain more convenient form, and uploaded to GitHub.

The text summarization and plots are done through LSAMon, which in turn uses the function RecordsSummary from the package “MathematicaForPredictionUtilities.m”, [AAp7].

Hamlet

textHamlet = 
  ToString /@ 
   Flatten[Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Data/MathematicaVsR-Data-Hamlet.csv"]];

TakeLargestBy[
 Tally[DeleteStopwords[ToLowerCase[Flatten[TextWords /@ textHamlet]]]], #[[2]] &, 20]

(* {{"ham", 358}, {"lord", 225}, {"king", 196}, {"o", 124}, {"queen", 120}, 
    {"shall", 114}, {"good", 109}, {"hor", 109}, {"come",  107}, {"hamlet", 107}, 
    {"thou", 105}, {"let", 96}, {"thy", 86}, {"pol", 86}, {"like", 81}, {"sir", 75}, 
    {"'t", 75}, {"know", 74}, {"enter", 73}, {"th", 72}} *)

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonEchoDocumentTermMatrixStatistics;
LSAMon-Data-Load-Hamlet-echo

USA state of union speeches

url = "https://github.com/antononcube/MathematicaVsR/blob/master/Data/MathematicaVsR-Data-StateOfUnionSpeeches.JSON.zip?raw=true";
str = Import[url, "String"];
filename = First@Import[StringToStream[str], "ZIP"];
aStateOfUnionSpeeches = Association@ImportString[Import[StringToStream[str], {"ZIP", filename, "String"}], "JSON"];

lsaObj = 
LSAMonUnit[aStateOfUnionSpeeches]⟹
LSAMonMakeDocumentTermMatrix⟹
LSAMonEchoDocumentTermMatrixStatistics["LogBase" -> 10];
LSAMon-Data-Load-StateOfUnionSpeeches-echo
TakeLargest[ColumnSumsAssociation[lsaObj⟹LSAMonTakeDocumentTermMatrix], 12]

(* <|"government" -> 7106, "states" -> 6502, "congress" -> 5023, 
     "united" -> 4847, "people" -> 4103, "year" -> 4022, 
     "country" -> 3469, "great" -> 3276, "public" -> 3094, "new" -> 3022, 
     "000" -> 2960, "time" -> 2922|> *)

Stop words

In some of the examples below we want to explicitly specify the stop words. Here are stop words derived using the built-in functions DictionaryLookup and DeleteStopwords.

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];

Short[stopWords]

(* {"a", "about", "above", "across", "add-on", "after", "again", <<290>>, 
   "you'll", "your", "you're", "yours", "yourself", "yourselves", "you've" } *)
    

Design considerations

The steps of the main LSA workflow addressed in this document follow.

  1. Get a collection of documents with associated ID’s.
  2. Create a document-term matrix.
    1. Here we apply the Bag-of-words model and Vector space model.
      1. The sequential order of the words is ignored and each document is represented as a point in a multi-dimensional vector space.
      2. That vector space axes correspond to the unique words found in the whole document collection.
    2. Consider the application of stemming rules.
    3. Consider the removal of stop words.
  3. Apply matrix-entries weighting functions.
    1. Those functions come from LSI.
    2. Functions like “IDF”, “TF-IDF”, “GFIDF”.
  4. Extract topics.
    1. One possible statistical way of doing this is with Dimensionality reduction.
    2. We consider using Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF).
  5. Make and display the topics table.
  6. Extract and display a statistical thesaurus of selected words.
  7. Map search queries or unseen documents over the extracted topics.
  8. Find the most important documents in the document collection. (Optional.)

The following flow-chart corresponds to the list of steps above.

LSA-worflows

In order to address:

  • the introduction of new elements in LSA workflows,
  • workflows elements variability, and
  • workflows iterative changes and refining,

it is beneficial to have a DSL for LSA workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make an LSA workflow monad and hints on the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […]

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as “pipelines”. We use the terms “monad pipeline” and “pipeline” below.

Monad design

The monad we consider is designed to speed-up the programming of LSA workflows outlined in the previous section. The monad is named LSAMon for “Latent Semantic Analysis Monad”.

We want to be able to construct monad pipelines of the general form:

LSAMon-Monad-Design-formula-1

LSAMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

LSAMon-Monad-Design-formula-2

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of LSAMon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline, or
  2. The object is (relatively) hard to compute.

Such objects are the document-term matrix, the dimensionality reduction factors, and the related topics.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial LSA workflows.
  • The text data supplied to the monad can be: (i) a list of strings, or (ii) an association with string values.
  • The monad uses the Linear vector space model .
  • The document-term frequency matrix can be created after removing stop words and/or word stemming.
  • It is easy to specify and apply different LSI weight functions. (Like “IDF” or “GFIDF”.)
  • The monad can do dimension reduction with SVD and NNMF and corresponding matrix factors are retrievable with monad functions.
  • Documents (or query strings) external to the monad are easily mapped into monad’s Linear vector space of terms and Linear vector space of topics.
  • The monad allows cursory examination and summarization of the data.
  • The pipeline values can be of different types. (Most monad functions modify the pipeline value; some modify the context; some just echo results.)
  • It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.
  • It is easy to tabulate extracted topics and related statistical thesauri.

The LSAMon components and their interactions are fairly simple.

The main LSAMon operations implicitly put in the context or utilize from the context the following objects:

  • document-term matrix,
  • the factors obtained by matrix factorization algorithms,
  • LSI weight functions specifications,
  • extracted topics.

Note that the set of types of LSAMon pipeline values is fairly heterogeneous, and certain awareness of “the current pipeline value” is assumed when composing LSAMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package “StateMonadCodeGenerator.m”, [AAp1].

LSAMon overview

When using a monad we lift certain data into the “monad space”, using monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the “lifting” into the LSAMon monad is done with the function LSAMonUnit. Results from the monad can be obtained with the functions LSAMonTakeValue, LSAMonTakeContext, or with the other LSAMon functions with the prefix “LSAMonTake” (see below.)

Here is a corresponding diagram of a generic computation with the LSAMon monad:

LSAMon-pipeline

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete LSAMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is combined together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The LSAMon functions are separated into four groups:

  • operations,
  • setters and droppers,
  • takers,
  • State Monad generic functions.

Monad functions interaction with the pipeline value and context

An overview of those functions is given in the tables in the next two sub-sections. The next section, “Monad elements”, gives details and examples for the usage of the LSAMon operations.

LSAMon-Overview-operations-context-interactions-table
LSAMon-Overview-setters-droppers-takers-context-interactions-table

State monad functions

Here are the LSAMon State Monad functions (generated using the prefix “LSAMon”, [AAp1, AA1].)

LSAMon-Overview-StMon-usage-descriptions-table

Main monad functions

Here are the usage descriptions of the main (not monad-supportive) LSAMon functions, which are explained in detail in the next section.

LSAMon-Overview-operations-usage-descriptions-table

Monad elements

In this section we show that LSAMon has all of the properties listed in the previous section.

The monad head

The monad head is LSAMon. Anything wrapped in LSAMon can serve as monad’s pipeline value. It is better though to use the constructor LSAMonUnit. (Which adheres to the definition in [Wk1].)

LSAMon[textHamlet, <||>]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic]⟹LSAMonEchoFunctionContext[Short];

Lifting data to the monad

The function lifting the data into the monad LSAMon is LSAMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

LSAMonUnit[]⟹LSAMonSetDocuments[textHamlet]⟹LSAMonMakeDocumentTermMatrix⟹LSAMonTakeDocumentTermMatrix

(See the sub-section “Setters, droppers, and takers” for more details of setting and taking values in LSAMon contexts.)

Currently the monad can deal with data in the following forms:

  • vectors of strings,
  • associations with string values.

Generally, WL makes it easy to extract columns of datasets in order to obtain vectors or matrices, so datasets are not currently supported in LSAMon.
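
Here is a small sketch of both supported forms (the package loading is the one shown in the section “Package load”):

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"]

LSAMonUnit[{"first document text", "second document text"}];                        (* a list of strings *)
LSAMonUnit[<|"id.1" -> "first document text", "id.2" -> "second document text"|>];  (* an association with string values *)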

Making of the document-term matrix

As it was mentioned above, with “term” we mean “a word or a stemmed word”. Here are examples of stemmed words.

WordData[#, "PorterStem"] & /@ {"consequential", "constitution", "forcing", ""}

The fundamental model of LSAMon is the so called Vector space model (or the closely related Bag-of-words model.) The document-term matrix is a linear vector space representation of the documents collection. That representation is further used in LSAMon to find topics and statistical thesauri.

Here is an example of ad hoc construction of a document-term matrix using a couple of paragraphs from “Hamlet”.

inds = {10, 19};
aTempText = AssociationThread[inds, textHamlet[[inds]]]

MatrixForm @ CrossTabulate[Flatten[KeyValueMap[Thread[{#1, #2}] &, TextWords /@ ToLowerCase[aTempText]], 1]]

When we construct the document-term matrix we (often) want to stem the words and (almost always) want to remove stop words. LSAMon’s function LSAMonMakeDocumentTermMatrix makes the document-term matrix and takes specifications for stemming and stop words.

lsaObj =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Automatic]⟹
   LSAMonEchoFunctionContext[ MatrixPlot[#documentTermMatrix] &]⟹
   LSAMonEchoFunctionContext[TakeLargest[ColumnSumsAssociation[#documentTermMatrix], 12] &];

We can retrieve the stop words used in a monad with the function LSAMonTakeStopWords.

Short[lsaObj⟹LSAMonTakeStopWords]

We can retrieve the stemming rules used in a monad with the function LSAMonTakeStemmingRules.

Short[lsaObj⟹LSAMonTakeStemmingRules]

The specification Automatic for stemming rules uses WordData[#,"PorterStem"]&.

Instead of the options style signature we can use positional signature.

  • Options style: LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic] .
  • Positional style: LSAMonMakeDocumentTermMatrix[{}, Automatic] .

LSI weight functions

After making the document-term matrix we will most likely apply LSI weight functions, [Wk2], like “GFIDF” and “TF-IDF”. (This follows the “standard” approach used in search engines for calculating weights for document-term matrices; see [MB1].)

Frequency matrix

We use the following definition of the frequency document-term matrix F.

Each entry f_ij of the matrix F is the number of occurrences of the term j in the document i.

Weights

Each entry m_ij of the weighted document-term matrix M derived from the frequency document-term matrix F is expressed with the formula

m_ij = g_j l_ij d_i,

where g_j is the global term weight, l_ij is the local term weight, and d_i is the normalization weight.
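
Here is an illustrative computation of that formula on a toy frequency matrix, using IDF as the global weight, raw term frequency as the local weight, and cosine (unit) normalization. This is a sketch only; LSAMon delegates the actual computations to “DocumentTermMatrixConstruction.m”, [AAp4].

F = {{2, 0, 1}, {0, 3, 1}, {1, 1, 0}};            (* rows: documents, columns: terms *)
idf = Log[Length[F]/#] & /@ Total[Unitize[F]];    (* g_j : IDF global term weights *)
M = Map[Normalize, (# idf) & /@ F];               (* l_ij = f_ij ; d_i : cosine normalization *)
MatrixForm[M]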

Various formulas exist for these weights and one of the challenges is to find the right combination of them when using different document collections.

Here is a table of weight functions formulas.

LSAMon-LSI-weight-functions-table

Computation specifications

LSAMon function LSAMonApplyTermWeightFunctions delegates the LSI weight functions application to the package “DocumentTermMatrixConstruction.m”, [AAp4].

Here is an example.

lsaHamlet = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix;
wmat =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat], 6]

Instead of using the positional signature of LSAMonApplyTermWeightFunctions we can specify the LSI functions using options.

wmat2 =
  lsaHamlet⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "TermFrequency", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonTakeWeightedDocumentTermMatrix;

TakeLargest[ColumnSumsAssociation[wmat2], 6]

Here are summaries of the non-zero values of weighted document-term matrices derived with different combinations of global, local, and normalization weight functions.

Magnify[#, 0.8] &@Multicolumn[Framed /@ #, 6] &@Flatten@
  Table[
   (wmat =
     lsaHamlet⟹
      LSAMonApplyTermWeightFunctions[gw, lw, nf]⟹
      LSAMonTakeWeightedDocumentTermMatrix;
    RecordsSummary[SparseArray[wmat]["NonzeroValues"], 
     List@StringRiffle[{gw, lw, nf}, ", "]]),
   {gw, {"IDF", "GFIDF", "Binary", "None", "ColumnStochastic"}},
   {lw, {"Binary", "Log", "None"}},
   {nf, {"Cosine", "None", "RowStochastic"}}]
AutoCollapse[]
LSAMon-LSI-weight-functions-combinations-application-table

Extracting topics

Streamlining topic extraction is one of the main reasons LSAMon was implemented. The topic extraction corresponds to the so-called “syntagmatic” relationships between the terms, [MS1].

Theoretical outline

The original weighted document-term matrix M is decomposed into the matrix factors W and H.

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H.

The rows of H are the topics. SVD produces orthogonal topics; NNMF does not.

The i-th document of the collection corresponds to the i-th row of W. Finding the Nearest Neighbors (NN’s) of the i-th document using the row similarities of the matrix W gives document NN’s through topic similarity.

The terms correspond to the columns of H. Finding NN’s based on similarities of H’s columns produces statistical thesaurus entries.

The term groups provided by H’s rows correspond to “syntagmatic” relationships. Using similarities of H’s columns we can produce term clusters that correspond to “paradigmatic” relationships.
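
Here is a small self-contained illustration of the factorization with a rank-2 truncated SVD via the built-in SingularValueDecomposition; the toy matrix is made up, and LSAMon obtains W and H with the packages mentioned in the Introduction.

M = N@{{2, 0, 1, 0}, {0, 3, 1, 0}, {1, 1, 0, 2}, {0, 0, 2, 3}};  (* toy weighted document-term matrix *)
{U, S, V} = SingularValueDecomposition[M, 2];
W = U.S;            (* document-topic matrix *)
H = Transpose[V];   (* topic-term matrix: each row is a topic *)
MatrixForm /@ {W, H}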

Computation specifications

Here is an example using the play “Hamlet” in which we specify additional stop words.

stopWords2 = {"enter", "exit", "[exit", "ham", "hor", "laer", "pol", "oph", "thy", "thee", "act", "scene"};

SeedRandom[2381]
lsaHamlet =
  LSAMonUnit[textHamlet]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Join[stopWords, stopWords2]]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 12, "MinNumberOfDocumentsPerTerm" -> 10, Method -> "NNMF", "MaxSteps" -> 20]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-Hamlet-topics-table

Here is an example using the USA presidents “state of union” speeches.

SeedRandom[7681]
lsaSpeeches =
  LSAMonUnit[aStateOfUnionSpeeches]⟹
   LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic,  "StopWords" -> Automatic]⟹
   LSAMonApplyTermWeightFunctions["GlobalWeightFunction" -> "IDF", "LocalWeightFunction" -> "None", "NormalizerFunction" -> "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 40, Method -> "NNMF", "MaxSteps" -> 12]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6, "NumberOfTerms" -> 10];
LSAMon-Extracting-topics-StateOfUnionSpeeches-topics-table

Note that in both examples:

  1. stemming is used when creating the document-term matrix,
  2. the default LSI re-weighting functions are used: “IDF”, “None”, “Cosine”,
  3. the dimension reduction algorithm NNMF is used.

Things to keep in mind.

  1. The interpretability provided by NNMF comes at a price.
  2. NNMF is prone to get stuck into local minima, so several topic extractions (and corresponding evaluations) have to be done.
  3. We would get different results with different NNMF runs using the same parameters. (NNMF uses random numbers initialization.)
  4. The NNMF topic vectors are not orthogonal.
  5. SVD is much faster than NNMF, but its topic vectors are hard to interpret.
  6. Generally, the topics derived with SVD are stable, they do not change with different runs with the same parameters.
  7. The SVD topics vectors are orthogonal, which provides for quick to find representations of documents not in the monad’s document collection.

The document-topic matrix W has column names that are automatically derived from the top three terms in each topic.

ColumnNames[lsaHamlet⟹LSAMonTakeW]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Of course, the row names of H are the same.

RowNames[lsaHamlet⟹LSAMonTakeH]

(* {"player-plai-welcom", "ro-lord-sir", "laert-king-attend",
    "end-inde-make", "state-room-castl", "daughter-pass-love",
    "hamlet-ghost-father", "father-thou-king",
    "rosencrantz-guildenstern-king", "ophelia-queen-poloniu",
    "answer-sir-mother", "horatio-attend-gentleman"} *)

Extracting statistical thesauri

The statistical thesaurus extraction corresponds to the “paradigmatic” relationships between the terms, [MS1].

Here is an example over the State of Union speeches.

entryWords = {"bank", "war", "economy", "school", "port", "health", "enemy", "nuclear"};

lsaSpeeches⟹
  LSAMonExtractStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12]⟹
  LSAMonEchoStatisticalThesaurus;
LSAMon-Extracting-statistical-thesauri-echo

In the code above: (i) the options signature style is used, (ii) the statistical thesaurus entry words are stemmed first.

We can also call LSAMonEchoStatisticalThesaurus directly without calling LSAMonExtractStatisticalThesaurus first.

 lsaSpeeches⟹
   LSAMonEchoStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"] &, entryWords], "NumberOfNearestNeighbors" -> 12];
LSAMon-Extracting-statistical-thesauri-echo

Mapping queries and documents to terms

One of the most natural operations is to find the representation of an arbitrary document (or sentence or a list of words) in monad’s Linear vector space of terms. This is done with the function LSAMonRepresentByTerms.

Here is an example in which a sentence is represented as a one-row matrix (in that space.)

obj =
  lsaHamlet⟹
   LSAMonRepresentByTerms["Hamlet, Prince of Denmark killed the king."]⟹
   LSAMonEchoValue;

Here we display only the non-zero columns of that matrix.

obj⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];

Transformation steps

Assume that LSAMonRepresentByTerms is given a list of sentences. Then that function performs the following steps.

1. The sentence is split into a list of words.

2. If monad’s document-term matrix was made by removing stop words the same stop words are removed from the list of words.

3. If monad’s document-term matrix was made by stemming the same stemming rules are applied to the list of words.

4. The LSI global weights and the LSI local weight and normalizer functions are applied to sentence’s contingency matrix.
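
Here is a sketch of the first three steps for a single sentence, using built-in functions only; the query sentence is the one used above, and the actual function additionally applies the monad’s LSI weights.

sentence = "Hamlet, Prince of Denmark killed the king.";
words = DeleteStopwords[ToLowerCase /@ TextWords[sentence]];  (* steps 1 and 2 *)
words = WordData[#, "PorterStem"] & /@ words;                 (* step 3 *)
Counts[words]                                                 (* term frequencies of the query *)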

Equivalent representation

Let us convince ourselves that documents used in the monad to build the weighted document-term matrix have the same representation as the corresponding rows of that matrix.

Here is an association of documents from monad’s document collection.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
queries
 
(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTerms[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["weightedDocumentTermMatrix"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["weightedDocumentTermMatrix"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-topics-context-sub-matrix

Mapping queries and documents to topics

Another natural operation is to find the representation of an arbitrary document (or a list of words) in the monad's linear vector space of topics. This is done with the function LSAMonRepresentByTopics.

Here is an example.

inds = {6, 10};
queries = Part[lsaHamlet⟹LSAMonTakeDocuments, inds];
Short /@ queries

(* <|"id.0006" -> "Getrude, Queen of Denmark, mother to Hamlet. Ophelia, daughter to Polonius.", 
     "id.0010" -> "ACT I. Scene I. Elsinore. A platform before the Castle."|> *)

lsaHamlet⟹
  LSAMonRepresentByTopics[queries]⟹
  LSAMonEchoFunctionValue[MatrixForm[Part[#, All, Keys[Select[SSparseMatrix`ColumnSumsAssociation[#], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix
lsaHamlet⟹
  LSAMonEchoFunctionContext[MatrixForm[Part[Slot["W"], inds, Keys[Select[SSparseMatrix`ColumnSumsAssociation[Part[Slot["W"], inds, All]], # > 0& ]]]]& ];
LSAMon-Mapping-queries-and-documents-to-terms-query-matrix

Theory

In order to clarify what the function LSAMonRepresentByTopics is doing, let us go through the formulas it is based on.

The original weighted document-term matrix M is decomposed into the matrix factors W and H:

M ≈ W.H, W ∈ ℝ^(m×k), H ∈ ℝ^(k×n).

The i-th row of M is expressed with the i-th row of W multiplied by H:

m_i ≈ w_i.H.

For a query vector q_0 ∈ ℝ^n (the rows of M live in the terms space ℝ^n) we want to find its topics representation vector x ∈ ℝ^k:

q_0 ≈ x.H.

Denote with H^(-1) the inverse or pseudo-inverse matrix of H. We have:

q_0.H^(-1) ≈ (x.H).H^(-1) = x.(H.H^(-1)) = x.I,

x ∈ ℝ^k, H^(-1) ∈ ℝ^(n×k), I ∈ ℝ^(k×k).

In LSAMon, for SVD H^(-1) = H^T; for NNMF H^(-1) is the pseudo-inverse of H.

The vector x is the topics representation obtained with LSAMonRepresentByTopics.
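
To ground the formulas numerically, here is a minimal sketch with random stand-ins for the factor H and a query vector q0; the values are assumptions for illustration only, not monad data.

SeedRandom[7];
h = RandomReal[1, {4, 12}];     (* k = 4 topics over n = 12 terms *)
q0 = RandomReal[1, 12];         (* query vector in the terms space, q0 ∈ ℝ^n *)
x = q0.PseudoInverse[h];        (* topics representation, x ∈ ℝ^k *)
Norm[q0 - x.h]                  (* residual of the approximation q0 ≈ x.H *)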

Tags representation

Sometimes we want to find the topics representation of tags associated with the monad's documents, where the tag-document associations are one-to-many. See [AA3].

Let us consider a concrete example – we want to find what topics correspond to the different presidents in the collection of State of Union speeches.

Here we find the document tags (president names in this case.)

tags = StringReplace[
   RowNames[
    lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 
   RegularExpression[".\\d\\d\\d\\d-\\d\\d-\\d\\d"] -> ""];
Short[tags]

Here is the number of unique tags (president names.)

Length[Union[tags]]
(* 42 *)

Here we compute the tag-topics representation matrix using the function LSAMonRepresentDocumentTagsByTopics.

tagTopicsMat =
 lsaSpeeches⟹
  LSAMonRepresentDocumentTagsByTopics[tags]⟹
  LSAMonTakeValue

Here is a heatmap plot of the tag-topics matrix made with the package “HeatmapPlot.m”, [AAp11].

HeatmapPlot[tagTopicsMat[[All, Ordering@ColumnSums[tagTopicsMat]]], DistanceFunction -> None, ImageSize -> Large]
LSAMon-Tags-representation-heatmap

Finding the most important documents

There are several algorithms we can apply for finding the most important documents in the collection. LSAMon utilizes two types of algorithms: (1) graph centrality measures based, and (2) matrix factorization based. With certain graph centrality measures the two algorithms are equivalent. In this sub-section we demonstrate the matrix factorization algorithm (the one that uses SVD).

Definition: The most important sentences have the most important words and the most important words are in the most important sentences.

That definition can be used to derive an iterations-based model that can be expressed with SVD or eigenvector finding algorithms, [LE1].
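
Here is a toy sketch of that idea: importance scores come from the first left singular vector of a small documents-by-terms matrix. (The matrix is an assumption for illustration; this is not the monad's internal code.)

mat = N@{{1, 0, 2}, {0, 1, 1}, {1, 1, 0}, {2, 0, 1}};   (* 4 documents × 3 terms *)
{u, s, v} = SingularValueDecomposition[mat, 1];
docScores = Abs[Flatten[u]]                             (* larger score ~ more "important" document *)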

Here we pick an important part of the play “Hamlet”.

focusText = 
  First@Pick[textHamlet, StringMatchQ[textHamlet, ___ ~~ "to be" ~~ __ ~~ "or not to be" ~~ ___, IgnoreCase -> True]];
Short[focusText]

(* "Ham. To be, or not to be- that is the question: Whether 'tis ....y. 
    O, woe is me T' have seen what I have seen, see what I see!" *)

LSAMonUnit[StringSplit[ToLowerCase[focusText], {",", ".", ";", "!", "?"}]]⟹
  LSAMonMakeDocumentTermMatrix["StemmingRules" -> {}, "StopWords" -> Automatic]⟹
  LSAMonApplyTermWeightFunctions⟹
  LSAMonFindMostImportantDocuments[3]⟹
  LSAMonEchoFunctionValue[GridTableForm];
LSAMon-Find-most-important-documents-table

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding “setter”, “dropper”, and “taker” functions as summarized in a previous section.

For example:

p = LSAMonUnit[textHamlet]⟹LSAMonMakeDocumentTermMatrix[Automatic, Automatic];

p⟹LSAMonTakeMatrix

If other values are put in the context, they can be obtained through the (generic) function LSAMonTakeContext, [AAp1]:

Short@(p⟹LSAMonTakeContext)["documents"]
 
(* <|"id.0001" -> "1604", "id.0002" -> "THE TRAGEDY OF HAMLET, PRINCE OF DENMARK", <<220>>, "id.0223" -> "THE END"|> *) 

Another generic function from [AAp1] is LSAMonTakeValue (used many times above.)

Here is an example of the “data dropper” LSAMonDropDocuments:

Keys[p⟹LSAMonDropDocuments⟹LSAMonTakeContext]

(* {"documentTermMatrix", "terms", "stopWords", "stemmingRules"} *)

(The “droppers” simply use the state monad function LSAMonDropFromContext, [AAp1]. For example, LSAMonDropDocuments is equivalent to LSAMonDropFromContext[“documents”].)
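
As a quick illustration of that equivalence, the same dropping can be done through the generic function (the call form follows the text above):

Keys[p⟹LSAMonDropFromContext["documents"]⟹LSAMonTakeContext]

(* expected to match the output above: {"documentTermMatrix", "terms", "stopWords", "stemmingRules"} *)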

The utilization of SSparseMatrix objects

The LSAMon monad heavily relies on SSparseMatrix objects, [AAp6, AA5], for internal representation of data and computation results.

An SSparseMatrix object is a matrix with named rows and columns.

Here is an example.

n = 6;
rmat = ToSSparseMatrix[
   SparseArray[{{1, 2} -> 1, {4, 5} -> 1}, {n, n}], 
   "RowNames" -> RandomSample[CharacterRange["A", "Z"], n], 
   "ColumnNames" -> RandomSample[CharacterRange["a", "z"], n]];
MatrixForm[rmat]
LSAMon-The-utilization-of-SSparseMatrix-random-matrix

In this section we look into some useful SSparseMatrix idioms applied within LSAMon.

Visualize with sorted rows and columns

In some situations it is beneficial to sort rows and columns of the (weighted) document-term matrix.

docTermMat = 
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix;
MatrixPlot[docTermMat[[Ordering[RowSums[docTermMat]],  Ordering[ColumnSums[docTermMat]]]], MaxPlotPoints -> 300, ImageSize -> Large]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeces-docTermMat-plot

The most popular terms in the document collection can be found through the association of the column sums of the document-term matrix.

TakeLargest[ColumnSumsAssociation[lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"state" -> 8852, "govern" -> 8147, "year" -> 6362, "nation" -> 6182,
     "congress" -> 5040, "unit" -> 5040, "countri" -> 4504, 
     "peopl" -> 4306, "american" -> 3648, "law" -> 3496|> *)
     

Similarly, for the least popular terms.

TakeSmallest[
 ColumnSumsAssociation[
  lsaSpeeches⟹LSAMonTakeDocumentTermMatrix], 10]

(* <|"036" -> 1, "027" -> 1, "_____________________" -> 1, "0111" -> 1, 
     "006" -> 1, "0000" -> 1, "0001" -> 1, "______________________" -> 1, 
     "____" -> 1, "____________________" -> 1|> *)

Showing only non-zero columns

In some cases we want to show only columns of the data or computation results matrices that have non-zero elements.

Here is an example (similar to other examples in the previous section.)

lsaHamlet⟹
  LSAMonRepresentByTerms[{"this country is rotten", 
    "where is my sword my lord", 
    "poison in the ear should be in the play"}]⟹
  LSAMonEchoFunctionValue[ MatrixForm[#1[[All, Keys[Select[ColumnSumsAssociation[#1], #1 > 0 &]]]]] &];
LSAMon-The-utilization-of-SSparseMatrix-lsaHamlet-queries-to-terms-matrix

In the pipeline code above: (i) from the list of queries a representation matrix is made, (ii) that matrix is assigned to the pipeline value, (iii) in the pipeline echo value function the non-zero columns are selected using the keys of the non-zero elements of the association obtained with ColumnSumsAssociation.

Similarities based on representation by terms

Here is a way to compute the similarity matrix of different sets of documents that are not required to be in the monad's document collection.

sMat1 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
  LSAMonTakeValue

sMat2 =
 lsaSpeeches⟹
  LSAMonRepresentByTerms[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹
  LSAMonTakeValue

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-terms-similarities-matrix

Similarities based on representation by topics

Similarly to the weighted Boolean similarities matrix computation above, we can compute a similarity matrix using the topics representations. Note that an additional normalization step is required.

sMat1 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-5, -2] ]] ]⟹
   LSAMonTakeValue;
sMat1 = WeightTermsOfSSparseMatrix[sMat1, "None", "None", "Cosine"]

sMat2 =
  lsaSpeeches⟹
   LSAMonRepresentByTopics[ aStateOfUnionSpeeches[[ Range[-7, -3] ]] ]⟹ 
   LSAMonTakeValue;
sMat2 = WeightTermsOfSSparseMatrix[sMat2, "None", "None", "Cosine"]

MatrixForm[sMat1.Transpose[sMat2]]
LSAMon-The-utilization-of-SSparseMatrix-lsaSpeeches-topics-similarities-matrix

Note the differences with the weighted Boolean similarity matrix in the previous sub-section – the similarities that are less than 1 are noticeably larger.

Unit tests

The development of LSAMon was done with two types of unit tests: (i) directly specified tests, [AAp8], and (ii) tests based on randomly generated pipelines, [AAp9].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate – and postulate – pipeline behavior.

Directly specified tests

Here we run the unit tests file “MonadicLatentSemanticAnalysis-Unit-Tests.wlt”, [AAp8].

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicLatentSemanticAnalysis-Unit-Tests.wlt"]
]
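
Optionally, the outcome counts can be summarized with standard TestReport properties (a small convenience addition):

testObject["TestsSucceededCount"]
testObject["TestsFailedCount"]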

The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp3].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "USASpeechesData", "HamletData", "StopWords", 
    "Make-document-term-matrix-1", "Make-document-term-matrix-2",
    "Apply-term-weights-1", "Apply-term-weights-2", "Topic-extraction-1",
    "Topic-extraction-2", "Topic-extraction-3", "Topic-extraction-4",
    "Statistical-thesaurus-1", "Topics-representation-1",
    "Take-document-term-matrix-1", "Take-weighted-document-term-matrix-1",
    "Take-document-term-matrix-2", "Take-weighted-document-term-matrix-2",
    "Take-terms-1", "Take-Factors-1", "Take-Factors-2", "Take-Factors-3",
    "Take-Factors-4", "Take-StopWords-1", "Take-StemmingRules-1"} *)

Random pipelines tests

Since the monad LSAMon is a DSL it is natural to test it with a large number of randomly generated “sentences” of that DSL. For the LSAMon DSL the sentences are LSAMon pipelines. The package “MonadicLatentSemanticAnalysisRandomPipelinesUnitTests.m”, [AAp9], has functions for generation of LSAMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeLSAMonRandomPipelines[100];
Length[pipelines]

(* 100 *)

Here is a sample of the generated pipelines:

LSAMon-Unit-tests-random-pipelines-sample-table

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunLSAMonPipelines[pipelines, "Echo" -> False];
]

From the test report results we see that a dozen tests failed with messages; all of the rest passed.

rpTRObj = TestReport[res]
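
Optionally, here is a summary of the outcomes with standard TestReport properties (a small convenience addition):

rpTRObj["TestsSucceededCount"]
rpTRObj["TestsFailedWithMessagesCount"]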

(The message failures, of course, have to be examined; some bugs were found in that way. Currently the messages produced by those tests are expected.)

Future plans

Dimension reduction extensions

It would be nice to extend the dimension reduction functionalities of LSAMon to include other algorithms, like Independent Component Analysis (ICA), [Wk5]. Ideally, with LSAMon we could do comparisons between SVD, NNMF, and ICA like the image de-noising based comparison explained in [AA8].

Another direction is to utilize Neural Networks for the topic extraction and making of statistical thesauri.

Conversational agent

Since LSAMon is a DSL, it can be relatively easily given a natural language interface.

Here is an example of natural language commands parsed into LSA code using the package [AAp13].

LSAMon-Future-directions-parsed-LSA-commands-table

Implementation notes

The implementation methodology of the LSAMon monad packages [AAp3, AAp9] followed the methodology created for the ClCon monad package [AAp10, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document “A monad for classification workflows”, [AA6].

A lot of the functionalities and signatures of LSAMon were designed and programmed through considerations of natural language commands specifications given to a specialized conversational agent.

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Non-Negative Matrix Factorization algorithm implementation in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Monadic Latent Semantic Analysis unit tests, (2019), MathematicaForPrediction at GitHub.

[AAp9] Anton Antonov, Monadic Latent Semantic Analysis random pipelines Mathematica unit tests, (2019), MathematicaForPrediction at GitHub.

[AAp10] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp11] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp12] Anton Antonov, Independent Component Analysis Mathematica package, MathematicaForPrediction at GitHub.

[AAp13] Anton Antonov, Latent semantic analysis workflows grammar in EBNF, (2018), ConversationalAgents at GitHub.

MathematicaForPrediction articles

[AA1] Anton Antonov, “Monad code generation and extension”, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, “Topic and thesaurus extraction from a document collection”, (2013), MathematicaForPrediction at GitHub.

[AA3] Anton Antonov, “The Great conversation in USA presidential speeches”, (2017), MathematicaForPrediction at WordPress.

[AA4] Anton Antonov, “Contingency tables creation examples”, (2016), MathematicaForPrediction at WordPress.

[AA5] Anton Antonov, “RSparseMatrix for sparse matrices with named rows and columns”, (2015), MathematicaForPrediction at WordPress.

[AA6] Anton Antonov, “A monad for classification workflows”, (2018), MathematicaForPrediction at WordPress.

[AA7] Anton Antonov, “Independent component analysis for multidimensional signals”, (2016), MathematicaForPrediction at WordPress.

[AA8] Anton Antonov, “Comparison of PCA, NNMF, and ICA over image de-noising”, (2016), MathematicaForPrediction at WordPress.

Other

[Wk1] Wikipedia entry, Monad.

[Wk2] Wikipedia entry, Latent semantic analysis.

[Wk3] Wikipedia entry, Distributional semantics.

[Wk4] Wikipedia entry, Non-negative matrix factorization.

[LE1] Lars Elden, Matrix Methods in Data Mining and Pattern Recognition, 2007, SIAM. ISBN-13: 978-0898716269.

[MB1] Michael W. Berry & Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval, 2nd. ed., 2005, SIAM. ISBN-13: 978-0898715811.

[MS1] Magnus Sahlgren, “The Distributional Hypothesis”, (2008), Rivista di Linguistica, 20 (1): 33–53.

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub.

Finding all structural breaks in time series

Introduction

In this document we show how to find the so-called “structural breaks”, [Wk1], in a given time series. The algorithm is based on a systematic application of the Chow Test, [Wk2], combined with an algorithm for finding local extrema in noisy time series, [AA1].

The algorithm implementation is based on the packages “MonadicQuantileRegression.m”, [AAp1], and “MonadicStructuralBreaksFinder.m”, [AAp2]. The package [AAp1] provides the software monad QRMon that allows rapid and concise specification of Quantile Regression workflows. The package [AAp2] extends QRMon with functionalities related to structural breaks finding.

What is a structural break?

It seems that at least one type of “structural break” is defined through regression models, [Wk1]. Roughly speaking, a structural break point of a time series is a regressor point that splits the time series in such a way that the obtained two parts have very different regression parameters.

One way to test such a point is to use the Chow test, [Wk2]. From [Wk2] we have the definition:

The Chow test, proposed by econometrician Gregory Chow in 1960, is a test of whether the true coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis to test for the presence of a structural break at a period which can be assumed to be known a priori (for instance, a major historical event such as a war).
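
To make the definition concrete, here is a minimal sketch of the Chow test F-statistic for a single candidate split point, assuming intercept-and-slope fits over {x, y} data. (This is an illustration, not the implementation in [AAp1].)

chowStatistic[data_, splitX_, k_: 2] :=
  Module[{rss, d1, d2},
   (* residual sum of squares of a linear (intercept + slope) fit *)
   rss[d_] := Total[(d[[All, 2]] - (Fit[d, {1, x}, x] /. x -> d[[All, 1]]))^2];
   d1 = Select[data, #[[1]] <= splitX &];
   d2 = Select[data, #[[1]] > splitX &];
   ((rss[data] - (rss[d1] + rss[d2]))/k)/((rss[d1] + rss[d2])/(Length[data] - 2 k))
   ];

For example, with the Wikipedia data assigned in the “Data used” section below, one can evaluate chowStatistic[data, 1.7].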

Example

Here is an example of applying the described algorithm to the data from [Wk2].

QRMonUnit[data]⟹QRMonPlotStructuralBreakSplits[ImageSize -> Small];
IntroductionsExample

Load packages

Here we load the packages [AAp1] and [AAp2].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicStructuralBreaksFinder.m"]

Data used

In this section we assign the data used in this document.

Illustration data from Wikipedia

Here is the data used in the Wikipedia article “Chow test”, [Wk2].

data = {{0.08, 0.34}, {0.16, 0.55}, {0.24, 0.54}, {0.32, 0.77}, {0.4, 
    0.77}, {0.48, 1.2}, {0.56, 0.57}, {0.64, 1.3}, {0.72, 1.}, {0.8, 
    1.3}, {0.88, 1.2}, {0.96, 0.88}, {1., 1.2}, {1.1, 1.3}, {1.2, 
    1.3}, {1.3, 1.4}, {1.4, 1.5}, {1.4, 1.5}, {1.5, 1.5}, {1.6, 
    1.6}, {1.7, 1.1}, {1.8, 0.98}, {1.8, 1.1}, {1.9, 1.4}, {2., 
    1.3}, {2.1, 1.5}, {2.2, 1.3}, {2.2, 1.3}, {2.3, 1.2}, {2.4, 
    1.1}, {2.5, 1.1}, {2.6, 1.2}, {2.6, 1.4}, {2.7, 1.3}, {2.8, 
    1.6}, {2.9, 1.5}, {3., 1.4}, {3., 1.8}, {3.1, 1.4}, {3.2, 
    1.4}, {3.3, 1.4}, {3.4, 2.}, {3.4, 2.}, {3.5, 1.5}, {3.6, 
    1.8}, {3.7, 2.1}, {3.8, 1.6}, {3.8, 1.8}, {3.9, 1.9}, {4., 2.1}};
ListPlot[data]
DataUsedWk2

S&P 500 Index

Here we get the time series corresponding to S&P 500 Index.

tsSP500 = FinancialData[Entity["Financial", "^SPX"], {{2015, 1, 1}, Date[]}]
DateListPlot[tsSP500, ImageSize -> Medium]
DataUsedSP500

Application of Chow Test

The Chow Test statistic is implemented in [AAp1]. In this document we rely on the relative comparison of the Chow Test statistic values: the larger the value of the Chow test statistic, the more likely we have a structural break.

Here is how we can apply the Chow Test with a QRMon pipeline to the [Wk2] data given above.

chowStats =
  QRMonUnit[data]⟹
   QRMonChowTestStatistic[Range[1, 3, 0.05], {1, x}]⟹
   QRMonTakeValue;

We see that the regressor point 1.7 has the largest Chow Test statistic value.

Block[{chPoint = TakeLargestBy[chowStats, Part[#, 2] &, 1]},
 ListPlot[{chowStats, chPoint}, Filling -> Axis,
  PlotLabel -> Row[{"Point with largest Chow Test statistic:", Spacer[8], chPoint}]]
 ]
ApplicationOfChowTestchowStats

The first argument of QRMonChowTestStatistic is a list of regressor points or Automatic. The second argument is a list of functions to be used for the regressions.

Here is an example of an automatic values call.

chowStats2 = QRMonUnit[data]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
ListPlot[chowStats2,
 GridLines -> {
   Part[Part[chowStats2, All, 1],
    OutlierIdentifiers`OutlierPosition[Part[chowStats2, All, 2], OutlierIdentifiers`SPLUSQuartileIdentifierParameters]],
   None},
 GridLinesStyle -> Directive[{Orange, Dashed}], Filling -> Axis]
ApplicationOfChowTestchowStats2

For the set of values displayed above we can apply simple 1D outlier identification methods, [AAp3], to automatically find the structural break point.

chowStats2[[All, 1]][[OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]]]
(* {1.7} *)

OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {20} *)

We cannot use that approach for finding all structural breaks in the general time series case, though, as exemplified by the following code over the S&P 500 Index time series.

chowStats3 = QRMonUnit[tsSP500]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
DateListPlot[chowStats3, Joined -> False, Filling -> Axis]
ApplicationOfChowTestSP500
OutlierPosition[chowStats3[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {} *)
 
OutlierPosition[chowStats3[[All, 2]], HampelIdentifierParameters]
(* {} *)

In the rest of the document we provide an algorithm that works for general time series.

Finding all structural break points

Consider the problem of finding all structural breaks in a given time series. That can be done (reasonably well) with the following procedure.

  1. Choose functions for testing for structural breaks (usually linear).
  2. Apply the Chow Test over a dense enough set of regressor points.
  3. Make a time series of the obtained Chow Test statistics.
  4. Find the local maxima of the Chow Test statistics time series.
  5. Determine the most significant break point.
  6. Plot the splits corresponding to the found structural breaks.

QRMon has a function, QRMonFindLocalExtrema, for finding local extrema; see [AAp1, AA1]. For the goal of finding all structural breaks, that semi-symbolic algorithm is the crucial part in the steps above.

Computation

Choose fitting functions

fitFuncs = {1, x};

Find Chow test statistics local maxima

The computation below combines steps 2, 3, and 4.

qrObj =
  QRMonUnit[tsSP500]⟹
   QRMonFindChowTestLocalMaxima["Knots" -> 20, 
    "NearestWithOutliers" -> True, 
    "NumberOfProximityPoints" -> 5, "EchoPlots" -> True, 
    "DateListPlot" -> True, 
    ImageSize -> Medium]⟹
   QRMonEchoValue;
ComputationLocalMaxima

Find most significant structural break point

splitPoint = TakeLargestBy[qrObj⟹QRMonTakeValue, #[[2]] &, 1][[1, 1]]

Plot structural breaks splits and corresponding fittings

Here we just make the plots without showing them.

sbPlots =
  QRMonUnit[tsSP500]⟹
   QRMonPlotStructuralBreakSplits[(qrObj⟹ QRMonTakeValue)[[All, 1]], 
    "LeftPartColor" -> Gray, "DateListPlot" -> True, 
    "Echo" -> False, 
    ImageSize -> Medium]⟹
   QRMonTakeValue;
   

The function QRMonPlotStructuralBreakSplits returns an association whose keys are pairs of split points and Chow Test statistics; the plots are the association's values.

Here we tabulate the plots, with the most significant breaks shown first.

Multicolumn[
 KeyValueMap[
  Show[#2, PlotLabel -> 
     Grid[{{"Point:", #1[[1]]}, {"Chow Test statistic:", #1[[2]]}}, Alignment -> Left]] &, KeySortBy[sbPlots, -#[[2]] &]], 2]
ComputationStructuralBreaksPlots

Future plans

We can further apply the algorithm explained above to identifying time series states or components: the structural break points are used as knots in an appropriate Quantile Regression fitting. Here is an example; a sketch of the idea is given after the plot.

The plan is to develop such an identifier of time series states in the near future. (And present it at WTC-2019.)

FuturePlansTimeSeriesStates
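
Here is a hedged sketch of that idea; QRMonQuantileRegression and QRMonPlot are assumed to be available from [AAp1] with their usual QRMon signatures, and the shape of the pipeline value follows the code above.

knots = (qrObj⟹QRMonTakeValue)[[All, 1]];   (* found split points; if these are {t, value} pairs, take their first coordinates *)
QRMonUnit[tsSP500]⟹
  QRMonQuantileRegression[knots, {0.5}]⟹
  QRMonPlot["DateListPlot" -> True];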

References

Articles

[Wk1] Wikipedia entry, Structural breaks.

[Wk2] Wikipedia entry, Chow test.

[AA1] Anton Antonov, “Finding local extrema in noisy data using Quantile Regression”, (2019), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, “A monad for Quantile Regression workflows”, (2018), MathematicaForPrediction at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic Structural Breaks Finder Mathematica package, (2019), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

Videos

[AAv1] Anton Antonov, Structural Breaks with QRMon, (2019), YouTube.