NY Times COVID-19 data visualization

Yesterday, in one of the forums I frequent, it was announced that the New York Times has published COVID-19 data on GitHub. I decided to make a Mathematica notebook that gives the data links and related code for data ingestion, together with some rudimentary data analysis.
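Here is a minimal ingestion sketch using the built-in Import; it assumes the repository's current layout (files such as us-states.csv with columns date, state, fips, cases, deaths), which may change over time:

(* Minimal ingestion sketch; assumes the NY Times repository still serves us-states.csv *)
url = "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv";
dsUSStates = Import[url, "Dataset", "HeaderLines" -> 1];
dsUSStates[1 ;; 3]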

Here is the Markdown version of the notebook: “NY Times COVID-19 data visualization”.

Here is a screenshot of the WL notebook that also links to it:

Screenshot of an interactive interface:

Histograms and Pareto principle adherence:

Conference abstracts similarities

Introduction

In this MathematicaVsR project we discuss and exemplify finding and analyzing similarities between texts using Latent Semantic Analysis (LSA). Code is provided in both Mathematica and R.

The LSA workflows are constructed and executed with the software monads LSAMon-WL, [AA1, AAp1], and LSAMon-R, [AAp2].

The illustrating examples are based on conference abstracts from rstudio::conf and the Wolfram Technology Conference (WTC), [AAd1, AAd2]. Since the number of rstudio::conf abstracts is small, and since rstudio::conf 2020 is about to start at the time of preparing this project, we focus on words and texts from RStudio’s ecosystem of packages and presentations.

Statistical thesaurus for words from RStudio’s ecosystem

Consider the focus words:

{"cloud","rstudio","package","tidyverse","dplyr","analyze","python","ggplot2","markdown","sql"}

Here is a statistical thesaurus for those words:

0az70qt8noeqf

Remark: Note that the computed thesaurus entries seem fairly “R-flavored.”

Similarity analysis diagrams

As expected, the abstracts from rstudio::conf tend to cluster closely – note the square formed in the top-left corner of the plot of a similarity matrix based on extracted topics:

1d5a83m8cghew

Here is a similarity graph based on the matrix above:

09y26s6kr3bv9
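For orientation, here is a rough, self-contained sketch of how such a graph can be constructed from a document-vs-document similarity matrix with built-in functions; the random matrix, the vertex names, and the 0.7 threshold are stand-ins, not the actual settings used above:

(* Sketch: threshold a similarity matrix and build a graph; the data is a random stand-in *)
SeedRandom[3];
m = RandomReal[1, {8, 8}];
matSim = (m + Transpose[m])/2;  (* symmetric "similarity" matrix *)
docNames = Table["abstract-" <> ToString[k], {k, 8}];
threshold = 0.7;  (* ad hoc choice *)
adj = Map[Boole[# >= threshold] &, matSim, {2}];
adj = adj - DiagonalMatrix[Diagonal[adj]];  (* drop self-loops *)
grSim = AdjacencyGraph[docNames, adj, VertexLabels -> "Name"]
CommunityGraphPlot[grSim]  (* clustering by graph communities, as in the next plot *)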

Here is a clustering (by “graph communities”) of the sub-graph highlighted in the plot above:

0rba3xgoknkwi

Notebooks

Comparison observations

LSA pipelines specifications

The packages LSAMon-WL, [AAp1], and LSAMon-R, [AAp2], make the comparison easy – the codes of the specified workflows are nearly identical.

Here is the Mathematica code:

lsaObj =
  LSAMonUnit[aDescriptions]⟹
   LSAMonMakeDocumentTermMatrix[{}, Automatic]⟹
   LSAMonEchoDocumentTermMatrixStatistics⟹
   LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
   LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 2, Method -> "ICA", MaxSteps -> 200]⟹
   LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6];

Here is the R code:

lsaObj <- 
  LSAMonUnit(lsDescriptions) %>% 
  LSAMonMakeDocumentTermMatrix( stemWordsQ = FALSE, stopWords = stopwords::stopwords() ) %>% 
  LSAMonApplyTermWeightFunctions( "IDF", "TermFrequency", "Cosine" ) 
  LSAMonExtractTopics( numberOfTopics = 36, minNumberOfDocumentsPerTerm = 5, method = "NNMF", maxSteps = 20, profilingQ = FALSE ) %>% 
  LSAMonEchoTopicsTable( numberOfTableColumns = 6, wideFormQ = TRUE ) 

Graphs and graphics

Mathematica’s built-in graph functions make the exploration of the similarities much easier than in R.

Mathematica’s matrix plots provide more control and are more readily informative.

Sparse matrix objects with named rows and columns

R’s built-in sparse matrices with named rows and columns are great. LSAMon-WL utilizes a similar, specially implemented sparse matrix object, see [AA1, AAp3].

References

Articles

[AA1] Anton Antonov, A monad for Latent Semantic Analysis workflows, (2019), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, Text similarities through bags of words, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Data

[AAd1] Anton Antonov, RStudio::conf-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

[AAd2] Anton Antonov, Wolfram-Technology-Conference-2016-to-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Latent Semantic Analysis Monad R package, (2019), R-packages at GitHub.

[AAp3] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

Pets licensing data analysis

Introduction

This notebook / document provides ground data analysis used to make or confirm certain modeling conjectures and assumptions of a Pets Retail Dynamics Model (PRDM), [AA1]. Seattle pets licensing data is used, [SOD2].

We want to provide answers to the following questions.

  • Does the Pareto principle manifest for pet breeds?

  • Does the Pareto principle manifest for ZIP codes?

  • Is there an upward trend in pet ownership?

All three questions have positive answers, assuming the retrieved data, [SOD2], is representative. See the last section for an additional discussion.

We also discuss pet adoption simulations that are done using Quantile Regression, [AA2, AAp1].

This notebook/document is part of the SystemsModeling at GitHub project “Pets retail dynamics”, [AA1].

Data

The pet licensing data was taken from this page: “Seattle Pet Licenses”, https://data.seattle.gov/Community/Seattle-Pet-Licenses/jguv-t9rb/data.

The ZIP code coordinates data was taken from a GitHub repository,
“US Zip Codes from 2013 Government Data”, https://gist.github.com/erichurst/7882666.

Animal licenses

image-3281001a-2f3d-4a8e-87b9-dc8a8b9803b3

Convert “Licence Issue Date” values into DateObjects.
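A minimal sketch of that conversion, assuming the licensing data is in a Dataset named dsLicenses and the column is literally named "Licence Issue Date" (both are assumptions about the ingested data):

(* Sketch: parse the licence issue date strings into DateObject values *)
dsLicenses = dsLicenses[All, Append[#, "Licence Issue Date" -> DateObject[#["Licence Issue Date"]]] &];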

Summary

image-49aecba4-2b43-40d7-87ba-15ceb848898d

Keep dogs and cats only

Since the number of animals that are not cats or dogs is very small we remove them from the data in order to produce more concise statistics.

ZIP code geo-coordinates

Summary

image-572ef441-b14e-438d-b5b7-85f244aa1857
image-c0d4f154-ee22-457f-8a36-715b77c92e08

Pareto principle adherence

In this section we apply the Pareto principle statistic in order to see whether the Pareto principle manifests over the different columns of the pet licensing data.
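Here is a small, self-contained sketch of the Pareto statistic computation (illustrated with randomly generated category labels, not the actual licensing records):

(* Pareto curve: cumulative fraction of registrations covered by the most frequent categories *)
SeedRandom[7];
labels = RandomChoice[N[Range[200]^(-1.1)] -> Table["breed-" <> ToString[k], {k, 200}], 10000];
counts = ReverseSort[Counts[labels]];
paretoCurve = Accumulate[Values[counts]]/Total[counts];
ListPlot[N@paretoCurve, Filling -> Axis, AxesLabel -> {"Breed rank", "Cumulative fraction"}]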

Breeds

We see a typical Pareto principle adherence for both dog breeds and cat breeds. For example, 20% of the dog breeds correspond to 80% of all registered dogs.

Note that the number of unique cat breeds is 4 times smaller than the number of unique dog breeds.

image-d1bac8f8-fe6c-42c0-8d52-45ed21ab6cc2
image-3c320985-1ed4-4d11-b983-29f87d4cdc7c

Animal names

We see a typical Pareto principle adherence for the frequencies of the pet names. For dogs, 10% of the unique names correspond to ~65% of the pets.

image-cb6368b6-b735-4f77-a3dd-bcb0be60f28e
image-bbcac6bb-5247-400c-a093-f3002206b5cf

Zip codes

We see a typical – even exaggerated – manifestation of the Pareto principle over the ZIP codes of the registered pets.

image-72cae8dd-d342-4c90-a11d-11607545133e

Geo-distribution

In this section we visualize the pets licensing geo-distribution with geo-histograms.

Both cats and dogs

image-94ae1316-ada2-4195-b2fc-6864ff1fd642

Dogs and cats separately

image-836dff19-7000-45e0-b0a4-1f3fe4a066c9

Pet stores

In this subsection we show the distribution of pet stores (in Seattle).

It would be better to show the corresponding geo-markers in the geo-histograms above instead of retrieving images. (This is not considered that important for the first version of this notebook/document.)

image-836dff19-7000-45e0-b0a4-1f3fe4a066c9

Time series

In this section we visualize the time series corresponding to the pet registrations.

Time series objects

Here we make time series objects:

image-49ae54cb-0644-427e-a015-0392284aaaa7
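A minimal sketch of such a construction with the built-in TimeSeries function, using stand-in daily counts instead of the actual licensing data:

(* Sketch: daily registration counts as a TimeSeries object (stand-in data) *)
dateCountPairs = Table[{DatePlus[DateObject[{2018, 1, 1}], k], RandomInteger[{10, 60}]}, {k, 0, 364}];
tsRegistrations = TimeSeries[dateCountPairs];
DateListPlot[tsRegistrations, PlotTheme -> "Detailed"]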

Time series plots of all registrations

Here are time series plots corresponding to all registrations:

image-02632be6-ab52-41b8-959a-e200641fdd8f

Time series plots of most recent registrations

It is an interesting question why the number of registrations is much higher in volume and frequency in the years 2018 and later.

image-85ebeab1-cad5-4fe3-bd5d-c7c8c94a753e

Upward trend

Here we apply both Linear Regression and Quantile Regression:

image-6df4d9d2-e48a-4d63-885c-6ed5112c0f15

We can see that there is a clear upward trend for both dogs and cats.
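Here is a sketch of how such fits can be set up with QRMon, [AAp1]; tsDogs is an assumed variable holding the dogs registrations time series, and the knot and probability choices are illustrative:

(* Sketch: Quantile Regression fits over the registrations time series;
   tsDogs is assumed to hold a TimeSeries of registration counts *)
QRMonUnit[tsDogs]⟹
  QRMonEchoDataSummary⟹
  QRMonQuantileRegression[12, {0.1, 0.5, 0.9}]⟹
  QRMonPlot;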

Quantile regression application

In this section we investigate the possibility of simulating the pet adoption rate. We plan to use simulations of the pet adoption rate in PRDM.

We do that using the software monad QRMon, [AAp1]. A list of steps follows.

  • Split the time series into windows corresponding to the years 2018 and 2019.

  • Find the difference between the two years.

  • Apply Quantile Regression to the difference using a reasonable grid of probabilities.

  • Simulate the difference.

  • Add the simulated difference to year 2019.

Simulation

In this sub-section we simulate the differences between the time series for 2018 and 2019, then we add the simulated difference to the time series of the year 2019.

image-8f9e3af0-46b7-4417-bd1e-3201c1134f34
image-30b836dc-f166-4f21-9c0b-9cca922058e6
image-65e4d1bf-dfff-4073-88a0-63177eeed1b5
image-6d107cad-6fef-46c8-92a8-59ea78b5039f
image-d0d517e0-925b-486c-88fd-287cfe02e799

Take the simulated time series difference:

Add the simulated time series difference to year 2019, clip the values less than zero, shift the result to 2020:

image-2a29feca-73b8-4fce-8051-145d74ec499c

Plot all years together

image-793f146a-07f9-455f-9bc7-2ef7d7897691

Discussion

This section has subsections that correspond to additional discussion questions. Not all questions are answered; the plan is to progressively answer them in subsequent versions of this notebook / document.

□ Too few pets

The number of registered pets seems too small. Seattle is a large city with more than 600,000 citizens, and approximately 50% of USA households have dogs; even a rough estimate along those lines implies far more pets than the ~50,000 registrations in the data.

□ Why too few pets?

Is Seattle such a high-tech city that its citizens are too busy to have pets?

Do most people simply not register their pets? (Very unlikely if they have used veterinary services.)

Is the data incomplete?

□ Registration rates

Why is the number of registrations much higher in volume and frequency in the years 2018 and later?

□ Adoption rates

Can we tell apart the adoption rates of pet-less people and people who already have pets?

Preliminary definitions

References

[AA1] Anton Antonov, Pets retail dynamics project, (2020), SystemModeling at GitHub.

[AA2] Anton Antonov, A monad for Quantile Regression workflows, (2018), MathematicaForPrediction at WordPress.

[AAp1] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub.

[SOD2] Seattle Open Data, “Seattle Pet Licenses”, https://data.seattle.gov/Community/Seattle-Pet-Licenses/jguv-t9rb/data .

Wolfram Live-Coding Series on Quantile Regression workflows

A month or so ago I was invited to make Quantile Regression presentations at the Wolfram Research Twitch channel.

The live-coding sessions

  1. In the first live-streaming / live-coding session I demonstrated how to make Quantile Regression workflows using the software monad QRMon and some of the underlying software design principles. (Namely “monadic programming”.)
  2. In the follow-up live-coding session I discussed topics like outliers removal (data cleaning), anomaly detection, and structural breaks.
  3. In the third live-coding session:
    • First, we demonstrated and explained how to do QR-based time series simulations and their applications in Operations Research.
    • Next, we discussed QR in 2D and 3D and a related application.
  4. In the fourth live-coding session we discussed the following topics.
    • Brief review of previous sessions.
    • Proclaiming the upcoming ResourceFunction["QuantileRegression"].
    • Predicting tomorrow from today’s data.
    • Using NLP techniques on time series.
    • Generation of QR workflows with natural language commands.

Notebooks

  • The notebook of the 2nd session is also attached. (I added a “References” section to it.)

Update (2019-11-13)

A few days ago the Wolfram Function Repository entry QuantileRegression was approved. The resource description page covers many of the topics discussed in the live-coding sessions on Quantile Regression.

Finding all structural breaks in time series

Introduction

In this document we show how to find the so-called “structural breaks”, [Wk1], in a given time series. The algorithm is based on a systematic application of the Chow Test, [Wk2], combined with an algorithm for finding local extrema in noisy time series, [AA1].

The algorithm implementation is based on the packages “MonadicQuantileRegression.m”, [AAp1], and “MonadicStructuralBreaksFinder.m”, [AAp2]. The package [AAp1] provides the software monad QRMon that allows rapid and concise specification of Quantile Regression workflows. The package [AAp2] extends QRMon with functionalities related to structural breaks finding.

What is a structural break?

It looks like at least one type of “structural break” is defined through regression models, [Wk1]. Roughly speaking, a structural break point of a time series is a regressor point that splits the time series in such a way that the two obtained parts have very different regression parameters.

One way to test such a point is to use the Chow test, [Wk2]. From [Wk2] we have the definition:

The Chow test, proposed by econometrician Gregory Chow in 1960, is a test of whether the true coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis to test for the presence of a structural break at a period which can be assumed to be known a priori (for instance, a major historical event such as a war).
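For completeness, the test statistic from that definition has the standard form below, where S_C is the residual sum of squares of the pooled regression, S_1 and S_2 are those of the two sub-samples, k is the number of regression parameters, and N_1, N_2 are the sub-sample sizes:

F = \frac{\left(S_C - (S_1 + S_2)\right)/k}{\left(S_1 + S_2\right)/(N_1 + N_2 - 2k)}

Under the null hypothesis of no break, F follows an F-distribution with k and N_1 + N_2 - 2k degrees of freedom.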

Example

Here is an example of the described algorithm application to the data from [Wk2].

QRMonUnit[data]⟹QRMonPlotStructuralBreakSplits[ImageSize -> Small];
IntroductionsExample

Load packages

Here we load the packages [AAp1] and [AAp2].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicStructuralBreaksFinder.m"]

Data used

In this section we assign the data used in this document.

Illustration data from Wikipedia

Here is the data used in the Wikipedia article “Chow test”, [Wk2].

data = {{0.08, 0.34}, {0.16, 0.55}, {0.24, 0.54}, {0.32, 0.77}, {0.4, 
    0.77}, {0.48, 1.2}, {0.56, 0.57}, {0.64, 1.3}, {0.72, 1.}, {0.8, 
    1.3}, {0.88, 1.2}, {0.96, 0.88}, {1., 1.2}, {1.1, 1.3}, {1.2, 
    1.3}, {1.3, 1.4}, {1.4, 1.5}, {1.4, 1.5}, {1.5, 1.5}, {1.6, 
    1.6}, {1.7, 1.1}, {1.8, 0.98}, {1.8, 1.1}, {1.9, 1.4}, {2., 
    1.3}, {2.1, 1.5}, {2.2, 1.3}, {2.2, 1.3}, {2.3, 1.2}, {2.4, 
    1.1}, {2.5, 1.1}, {2.6, 1.2}, {2.6, 1.4}, {2.7, 1.3}, {2.8, 
    1.6}, {2.9, 1.5}, {3., 1.4}, {3., 1.8}, {3.1, 1.4}, {3.2, 
    1.4}, {3.3, 1.4}, {3.4, 2.}, {3.4, 2.}, {3.5, 1.5}, {3.6, 
    1.8}, {3.7, 2.1}, {3.8, 1.6}, {3.8, 1.8}, {3.9, 1.9}, {4., 2.1}};
ListPlot[data]
DataUsedWk2

S&P 500 Index

Here we get the time series corresponding to S&P 500 Index.

tsSP500 = FinancialData[Entity["Financial", "^SPX"], {{2015, 1, 1}, Date[]}]
DateListPlot[tsSP500, ImageSize -> Medium]
DataUsedSP500

Application of Chow Test

The Chow Test statistic is implemented in [AAp1]. In this document we rely on the relative comparison of the Chow Test statistic values: the larger the value of the Chow test statistic, the more likely we have a structural break.

Here is how we can apply the Chow Test with a QRMon pipeline to the [Wk2] data given above.

chowStats =
  QRMonUnit[data]⟹
   QRMonChowTestStatistic[Range[1, 3, 0.05], {1, x}]⟹
   QRMonTakeValue;

We see that the regressor points near 1.7 have the largest Chow Test statistic values.

Block[{chPoint = TakeLargestBy[chowStats, Part[#, 2] &, 1]},
 ListPlot[{chowStats, chPoint}, Filling -> Axis, 
  PlotLabel -> Row[{"Point with largest Chow Test statistic:", Spacer[8], chPoint}]]]
ApplicationOfChowTestchowStats

The first argument of QRMonChowTestStatistic is a list of regressor points or Automatic. The second argument is a list of functions to be used for the regressions.

Here is an example of an automatic values call.

chowStats2 = QRMonUnit[data]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
ListPlot[chowStats2, 
 GridLines -> {chowStats2[[All, 1]][[OutlierIdentifiers`OutlierPosition[chowStats2[[All, 2]], OutlierIdentifiers`SPLUSQuartileIdentifierParameters]]], None}, 
 GridLinesStyle -> Directive[{Orange, Dashed}], Filling -> Axis]
ApplicationOfChowTestchowStats2

For the set of values displayed above we can apply simple 1D outlier identification methods, [AAp3], to automatically find the structural break point.

chowStats2[[All, 1]][[OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]]]
(* {1.7} *)

OutlierPosition[chowStats2[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {20} *)

We cannot use that approach to find all structural breaks in the general time series case, though, as exemplified by the following code using the S&P 500 Index time series.

chowStats3 = QRMonUnit[tsSP500]⟹QRMonChowTestStatistic⟹QRMonTakeValue;
DateListPlot[chowStats3, Joined -> False, Filling -> Axis]
ApplicationOfChowTestSP500
OutlierPosition[chowStats3[[All, 2]], SPLUSQuartileIdentifierParameters]
(* {} *)
 
OutlierPosition[chowStats3[[All, 2]], HampelIdentifierParameters]
(* {} *)

In the rest of the document we provide an algorithm that works for general time series.

Finding all structural break points

Consider the problem of finding all structural breaks in a given time series. That can be done (reasonably well) with the following procedure.

  1. Choose functions for testing for structural breaks (usually linear).
  2. Apply the Chow Test over a dense enough set of regressor points.
  3. Make a time series of the obtained Chow Test statistics.
  4. Find the local maxima of the Chow Test statistics time series.
  5. Determine the most significant break point.
  6. Plot the splits corresponding to the found structural breaks.

QRMon has a function, QRMonFindLocalExtrema, for finding local extrema; see [AAp1, AA1]. For the goal of finding all structural breaks, that semi-symbolic algorithm is the crucial part in the steps above.

Computation

Choose fitting functions

fitFuncs = {1, x};

Find Chow test statistics local maxima

The computation below combines steps 2, 3, and 4.

qrObj =
  QRMonUnit[tsSP500]⟹
   QRMonFindChowTestLocalMaxima["Knots" -> 20, 
    "NearestWithOutliers" -> True, 
    "NumberOfProximityPoints" -> 5, "EchoPlots" -> True, 
    "DateListPlot" -> True, 
    ImageSize -> Medium]⟹
   QRMonEchoValue;
ComputationLocalMaxima

Find most significant structural break point

splitPoint = TakeLargestBy[qrObj⟹QRMonTakeValue, #[[2]] &, 1][[1, 1]]

Plot structural breaks splits and corresponding fittings

Here we just make the plots without showing them.

sbPlots =
  QRMonUnit[tsSP500]⟹
   QRMonPlotStructuralBreakSplits[(qrObj⟹ QRMonTakeValue)[[All, 1]], 
    "LeftPartColor" -> Gray, "DateListPlot" -> True, 
    "Echo" -> False, 
    ImageSize -> Medium]⟹
   QRMonTakeValue;
   

The function QRMonPlotStructuralBreakSplits returns an association that has as keys pairs of split points and Chow Test statistics; the plots are the association’s values.

Here we tabulate the plots with the most significant breaks shown first.

Multicolumn[
 KeyValueMap[
  Show[#2, PlotLabel -> 
     Grid[{{"Point:", #1[[1]]}, {"Chow Test statistic:", #1[[2]]}}, Alignment -> Left]] &, KeySortBy[sbPlots, -#[[2]] &]], 2]
ComputationStructuralBreaksPlots

Future plans

We can further apply the algorithm explained above to identifying time series states or components. The structural break points are used as knots in appropriate Quantile Regression fitting. Here is an example.

The plan is to develop such an identifier of time series states in the near future. (And present it at WTC-2019.)

FuturePlansTimeSeriesStates

References

Articles

[Wk1] Wikipedia entry, Structural breaks.

[Wk2] Wikipedia entry, Chow test.

[AA1] Anton Antonov, “Finding local extrema in noisy data using Quantile Regression”, (2019), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, “A monad for Quantile Regression workflows”, (2018), MathematicaForPrediction at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Quantile Regression Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Monadic Structural Breaks Finder Mathematica package, (2019), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

Videos

[AAv1] Anton Antonov, Structural Breaks with QRMon, (2019), YouTube.

Parametrized event records data transformations

Introduction

In this document we describe transformations of events records data in order to make that data more amenable for the application of Machine Learning (ML) algorithms.

Consider the following problem formulation (given with the next five bullet points).

  • From data representing a (most likely very) diverse set of events we want to derive contingency matrices corresponding to each of the variables in that data.

  • The events are observations of the values of a certain set of variables for a certain set of entities. Not all entities have events for all variables.

  • The observation times do not form a regular time grid.

  • Each contingency matrix has rows corresponding to the entities in the data and has columns corresponding to time.

  • The software component providing the functionality should allow parametrization and repeated execution. (As in ML classifier training and testing scenarios.)

The phrase “event records data” is used instead of “time series” in order to emphasize that (i) some variables have categorical values, and (ii) the data can be given in some general database form, like a long-form transactions table.

The required transformations of the event records in the problem formulation above are done through the monad ERTMon, [AAp3]. (The name “ERTMon” comes from “Event Records Transformations Monad”.)

The monad code generation and utilization is explained in [AA1] and implemented with [AAp1].

It is assumed that the event records data is put in a form that makes it (relatively) easy to extract time series for the set of entity-variable pairs present in that data.

In brief, ERTMon performs the following sequence of transformations.

  1. The event records of each entity-variable pair are shifted to adhere to a specified start or end point.

  2. The event records for each entity-variable pair are aggregated and normalized with specified functions over a specified regular grid.

  3. Entity vs. time interval contingency matrices are made for each combination of variable and aggregation function.

The transformations are specified with a “computation specification” dataset.

Here is an example of an ERTMon pipeline over event records:

ERTMon-small-pipeline-example

The rest of the document describes in detail:

  • the structure, format, and interpretation of the event records data and computations specifications,

  • the transformations of time series aligning, aggregation, and normalization,

  • the software pattern design – a monad – that allows sequential specifications of desired transformations.

Concrete examples are given using weather data. See [AAp9].

Package load

The following commands load the packages [AAp1-AAp9].

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicEventRecordsTransformations.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/WeatherEventRecords.m"]

Data load

The data we use is weather data from meteorological stations close to certain major cities. We retrieve the data with the function WeatherEventRecords from the package [AAp9].

?WeatherEventRecords

WeatherEventRecords[ citiesSpec_: {{_String, _String}..}, dateRange:{{_Integer, _Integer, _Integer}, {_Integer, _Integer, _Integer}}, wProps:{_String..} : {“Temperature”}, nStations_Integer : 1 ] gives an association with event records data.

citiesSpec = {{"Miami", "USA"}, {"Chicago", "USA"}, {"London",  "UK"}};
dateRange = {{2017, 7, 1}, {2018, 6, 30}};
wProps = {"Temperature", "MaxTemperature", "Pressure", "Humidity", "WindSpeed"};
res1 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

citiesSpec = {{"Jacksonville", "USA"}, {"Peoria", "USA"}, {"Melbourne", "Australia"}};
dateRange = {{2016, 12, 1}, {2017, 12, 31}};
res2 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

Here we assign the obtained datasets to variables we use below:

eventRecords = Join[res1["eventRecords"], res2["eventRecords"]];
entityAttributes = Join[res1["entityAttributes"], res2["entityAttributes"]];

Here are the summaries of the datasets eventRecords and entityAttributes:

RecordsSummary[eventRecords]
ERTMon-RecordsSummary-eventRecord
RecordsSummary[entityAttributes]
ERTMon-RecordsSummary-entityAttributes

Design considerations

Workflow

The steps of the main event records transformations workflow addressed in this document follow.

  1. Ingest event records and entity attributes given in the Star schema style.

  2. Ingest a computation specification.
    1. Specified are aggregation time intervals, aggregation functions, normalization types and functions.

  3. Group event records based on unique entity ID and variable pairs.
    1. Additional filtering can be applied using the entity attributes.

  4. For each variable find descriptive statistics properties.
    1. This is to facilitate normalization procedures.
    2. Optionally, for each variable find outlier boundaries.

  5. Align each group of records to start or finish at some specified point.
    1. For each variable we want to impose a regular time grid.

  6. From each group of records produce a time series.

  7. For each time series do the prescribed aggregation and normalization.
    1. The variable that corresponds to each group of records has at least one (possibly several) computation specifications.

  8. Make a contingency matrix for each time series obtained in the previous step.
    1. The contingency matrices have entity IDs as rows and the time grid intervals as columns.

The following flow-chart corresponds to the list of steps above.

ERTMon-workflows

A corresponding monadic pipeline is given in the section “Larger example pipeline”.

Feature engineering perspective

The workflow above describes a way to do feature engineering over a collection of event records data. For a given entity ID and a variable we derive several different time series.

A couple of examples follow.

  • One possible derived feature (time series): for each entity-variable pair we make a time series of the hourly mean value over each of the eight most recent hours for that entity. The mean values are normalized by the average values of the records corresponding to that entity-variable pair.

  • Another possible derived feature (time series): for each entity-variable pair we make a time series with the number of outliers in each half-hour interval, considering the most recent 20 half-hour intervals. The outliers are found by using outlier boundaries derived by analyzing all values of the corresponding variable, across all entities.

From the examples above – and some others – we conclude that for each feature we want to be able to specify:

  • maximum history length (say from the most recent observation),

  • aggregation interval length,

  • aggregation function (to be applied in each interval),

  • normalization function (per entity, per cohort of entities, per variable),

  • conversion of categorical values into numerical ones.

Repeated execution

We want to be able to do repeated executions of the specified workflow steps.

Consider the following scenario. After the event records data is converted to an entity-vs-feature contingency matrix, we use that matrix to train and test a classifier. We want to find the combination of features that gives the best classifier results. For that reason we want to be able to easily and systematically change the computation specifications (interval size, aggregation and normalization functions, etc.). With different computation specifications we obtain different entity-vs-feature contingency matrices, which would have different performance with different classifiers.

Using the classifier training and testing scenario we see that there is another repeated execution perspective: after the feature engineering is done over the training data, we want to be able to execute exactly the same steps over the test data. Note that with the training data we find certain global or cohort normalization values and outlier boundaries that have to be used over the test data. (Not derived from the test data.)

The following diagram further describes the repeated execution workflow.

ERTMon-repeated-execution-workflow

Further discussion of making and using ML classification workflows through the monad software design pattern can be found in [AA2].

Event records data design

The data is structured to follow the style of a Star schema. We have an event records dataset (table) and an entity attributes dataset (table).

The proposed dataset (table) structures satisfy a wide range of data modeling requirements. (Medical and financial modeling included.)

Entity event data

The entity event data has the columns “EntityID”, “LocationID”, “ObservationTime”, “Variable”, “Value”.

RandomSample[eventRecords, 6]
ERTMon-eventRecords-sample

Most events can be described through “Entity event data”. The entities can be anything that produces a set of event data: financial transactions, vital sign monitors, wind speed sensors, chemical concentrations sensors.

The locations can be anything that gives the events certain “spatial” attributes: medical units in hospitals, sensors geo-locations, tiers of financial transactions.
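For concreteness, here is a single made-up event record in the format described above (the EntityID "KMFL" is one of the weather stations used later; the other values are fabricated for illustration):

(* A made-up event record in the event table format *)
<|"EntityID" -> "KMFL", "LocationID" -> "loc-1", 
  "ObservationTime" -> AbsoluteTime[{2017, 7, 1, 12, 0, 0}], 
  "Variable" -> "Temperature", "Value" -> 28.3|>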

Entity attributes data

The entity attributes dataset (table) has attributes (immutable properties) of the entities. (Like, gender and race for people, longitude and latitude for wind speed sensors.)

entityAttributes[[1 ;; 6]]
ERTMon-entityAttributes-sample

Example

For example, here we take all weather stations in USA:

ws = Normal[entityAttributes[Select[#Attribute == "Country" && #Value == "USA" &], "EntityID"]]

(* {"KMFL", "KMDW", "KNIP", "KGEU"} *)

Here we take all temperature event records for those weather stations:

srecs = eventRecords[Select[#Variable == "Temperature" && MemberQ[ws, #EntityID] &]];

And here we plot the corresponding time series, obtained by grouping the records by station (entity IDs) and taking the columns “ObservationTime” and “Value”:

grecs = Normal @ GroupBy[srecs, #EntityID &][All, All, {"ObservationTime", "Value"}];
DateListPlot[grecs, ImageSize -> Large, PlotTheme -> "Detailed"]
ERTMon-DateListPlot-USA-Temperature

Monad elements

This section goes through the steps of the general ERTMon workflow. For didactic purposes each sub-section changes the pipeline assigned to the variable p. Of course all functions can be chained into one big pipeline as shown in the section “Larger example pipeline”.

Monad unit

The monad is initialized with ERTMonUnit.

ERTMonUnit[]

(* ERTMon[None, <||>] *)

Ingesting event records and entity attributes

The event records dataset (table) and entity attributes dataset (table) are set with corresponding setter functions. Alternatively, they can be read from files in a specified directory.

p =
  ERTMonUnit[]⟹
   ERTMonSetEventRecords[eventRecords]⟹
   ERTMonSetEntityAttributes[entityAttributes]⟹
   ERTMonEchoDataSummary;
   
ERTMon-echo-data-summary

Computation specification

Using the package [AAp3] we can create a computation specification dataset. Below is an example of constructing a fairly complicated computation specification.

The package function EmptyComputationSpecificationRow can be used to construct the rows of the specification.

EmptyComputationSpecificationRow[]

(* <|"Variable" -> Missing[], "Explanation" -> "", 
    "MaxHistoryLength" -> 3600, "AggregationIntervalLength" -> 60, 
    "AggregationFunction" -> "Mean", "NormalizationScope" -> "Entity", 
    "NormalizationFunction" -> "None"|> *)
    
    
compSpecRows = 
  Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
      "MaxHistoryLength" -> 60*24*3600, 
      "AggregationIntervalLength" -> 2*24*3600, 
      "AggregationFunction" -> "Mean", 
      "NormalizationScope" -> "Entity", 
      "NormalizationFunction" -> "Mean"|>] & /@ 
   Union[Normal[eventRecords[All, "Variable"]]];
compSpecRows =
  Join[
   compSpecRows, 
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "Range", 
       "NormalizationScope" -> "Country", 
       "NormalizationFunction" -> "Mean"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]],
   Join[EmptyComputationSpecificationRow[], <|"Variable" -> #, 
       "MaxHistoryLength" -> 60*24*3600, 
       "AggregationIntervalLength" -> 2*24*3600, 
       "AggregationFunction" -> "OutliersCount", 
       "NormalizationScope" -> "Variable"|>] & /@ 
    Union[Normal[eventRecords[All, "Variable"]]]
   ];

The constructed rows are assembled into a dataset (with Dataset). The function ProcessComputationSpecification is used to convert a user-made specification dataset into a form used by ERTMon.

wCompSpec = 
 ProcessComputationSpecification[Dataset[compSpecRows]][SortBy[#Variable &]]
 
ERTMon-wCompSpec

The computation specification is set to the monad with the function ERTMonSetComputationSpecification.

Alternatively, a computation specification can be created and filled-in as a CSV file and read into the monad. (Not described here.)

Grouping event records by entity-variable pairs

With the function ERTMonGroupEntityVariableRecords we group the event records by the found unique entity-variable pairs. Note that in the pipeline below we set the computation specification first.

p =
  p⟹
   ERTMonSetComputationSpecification[wCompSpec]⟹
   ERTMonGroupEntityVariableRecords;

Descriptive statistics (per variable)

After the data is ingested into the monad and the event records are grouped per entity-variable pairs we can find certain descriptive statistics for the data. This is done with the general function ERTMonComputeVariableStatistic and the specialized function ERTMonFindVariableOutlierBoundaries.

p⟹ERTMonComputeVariableStatistic[RecordsSummary]⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-1
p⟹ERTMonComputeVariableStatistic⟹ERTMonEchoValue;
ERTMon-compute-variable-statistic-2
p⟹ERTMonComputeVariableStatistic[TakeLargest[#, 3] &]⟹ERTMonEchoValue;

(* value: <|Humidity->{1.,1.,0.993}, MaxTemperature->{48,48,48},
            Pressure->{1043.1,1042.8,1041.1}, Temperature->{42.28,41.94,41.89},
            WindSpeed->{54.82,44.63,44.08}|> *)
            
            

Finding the variables outlier boundaries

The finding of outliers counts and fractions can be specified in the computation specification. Because of this there is a specialized function for outlier finding ERTMonFindVariableOutlierBoundaries. That function makes the association of the found variable outlier boundaries (i) to be the pipeline value and (ii) to be the value of context key “variableOutlierBoundaries”. The outlier boundaries are found using the functions of the package [AAp6].

If no argument is specified ERTMonFindVariableOutlierBoundaries uses the Hampel identifier (HampelIdentifierParameters).

p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonEchoValue;

(* value: <|Humidity->{0.522536,0.869464}, MaxTemperature->{14.2106,31.3494},
            Pressure->{1012.36,1022.44}, Temperature->{9.88823,28.3318},
            WindSpeed->{5.96141,19.4086}|> *)
            
Keys[p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonTakeContext]

(* {"eventRecords", "entityAttributes", "computationSpecification",
    "entityVariableRecordGroups", "variableOutlierBoundaries"} *)
   

In the rest of the document we use the outlier boundaries found with the more conservative identifier SPLUSQuartileIdentifierParameters.

p =
  p⟹
   ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
   ERTMonEchoValue;

(* value: <|Humidity->{0.176,1.168}, MaxTemperature->{-1.67,45.45},
            Pressure->{1003.75,1031.35}, Temperature->{-5.805,43.755},
            WindSpeed->{-5.005,30.555}|> *)

Conversion of event records to time series

The grouped event records are converted into time series with the function ERTMonEntityVariableGroupsToTimeSeries. The time series are aligned to a time point specification given as an argument. The argument can be: a date object, “MinTime”, “MaxTime”, or “None”. (“MaxTime” is the default.)

p⟹
  ERTMonEntityVariableGroupsToTimeSeries["MinTime"]⟹
  ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
  
ERTMon-records-groups-minTime

Compare the last output with the output of the following command.

p =
  p⟹
   ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
   ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
ERTMon-records-groups-maxTime

Time series restriction and aggregation

The main goal of ERTMon is to convert a diverse, general collection of event records into a collection of aligned time series over specified regular time grids.

The regular time grids are specified with the columns “MaxHistoryLength” and “AggregationIntervalLength” of the computation specification. The time series of the variables in the computation specification are restricted to the corresponding maximum history lengths and are aggregated using the corresponding aggregation lengths and functions.
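As a point of reference, the restriction and aggregation steps have built-in analogues, TimeSeriesWindow and TimeSeriesAggregate; the following stand-alone sketch (with made-up data and made-up interval choices) mirrors what ERTMon does per entity-variable pair:

(* Stand-in hourly series over 60 days *)
ts = TimeSeries[Table[{DatePlus[DateObject[{2017, 7, 1}], {k, "Hour"}], Sin[k/24.] + RandomReal[0.2]}, {k, 0, 60*24}]];

(* Restrict to the most recent 10 days, then aggregate with Mean over 2-day intervals *)
tsRestricted = TimeSeriesWindow[ts, {DatePlus[ts["LastDate"], {-10, "Day"}], ts["LastDate"]}];
tsAggregated = TimeSeriesAggregate[tsRestricted, Quantity[2, "Days"], Mean];
DateListPlot[{tsRestricted, tsAggregated}, PlotTheme -> "Detailed"]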

p =
  p⟹
   ERTMonAggregateTimeSeries⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-restriction-and-aggregation

Application of time series functions

At this point we can apply time-series-modifying functions. A frequently used such function is the moving average.

p⟹
  ERTMonApplyTimeSeriesFunction[MovingAverage[#, 6] &]⟹
  ERTMonEchoFunctionValue[DateListPlot /@ #[[{1, 3, 5}]] &];
ERTMon-moving-average-application

Note that the result is given as a pipeline value; the value of the context key “timeSeries” is not changed.

(In the future, the computation specification and its handling might be extended to handle moving average or other time series function specifications.)

Normalization

With “normalization” we mean that the values of a given time series are divided (normalized) by a descriptive statistic derived from a specified set of values. The specified set of values is given with the parameter “NormalizationScope” in the computation specification.

At the normalization stage each time series is associated with an entity ID and a variable.

Normalization is done at three different scopes: “entity”, “attribute”, and “variable”.

Given a time series T(i,var) corresponding to entity ID i and a variable var we define the normalization values for the different scopes in the following ways.

  • Normalization with scope “entity” means that the descriptive statistic is derived from the values of T(i,var) only.

  • Normalization with scope “attribute” means that

    • from the entity attributes dataset we find the attribute value that corresponds to i,

    • next we find all entity ID’s that are associated with the same attribute value,

    • next we find the value of the normalization descriptive statistic using the time series that correspond to the variable var and the entity ID’s found in the previous step.

  • Normalization with scope “variable” means that the descriptive statistic is derived from the values of all time series corresponding to var.

Note that the scope “entity” is the most granular, and the scope “variable” is the coarsest.
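For example, with normalization function "Mean" at scope "Entity", the normalization amounts to dividing each time series by the mean of that entity's own values; a tiny stand-alone sketch:

(* Entity-scope normalization sketch: divide a series by the mean of its own values *)
ts = TimeSeries[RandomReal[{10, 30}, 20], {Range[20]}];
tsNormalized = ts/Mean[ts["Values"]];
Mean[tsNormalized["Values"]]  (* ~1 after normalization *)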

The following command demonstrates the normalization effect – compare the y-axes scales of the time series corresponding to the same entity-variable pair.

p =
  p⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &]⟹
   ERTMonNormalize⟹
   ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
   
ERTMon-normalization

Here are the normalization values that should be used when normalizing “unseen data.”

p⟹ERTMonTakeNormalizationValues

(* <|{"Humidity.Range", "Country", "USA"} -> 0.0864597, 
  {"Humidity.Range", "Country", "UK"} -> 0.066, 
  {"Humidity.Range", "Country", "Australia"} -> 0.145968, 
  {"MaxTemperature.Range", "Country", "USA"} -> 2.85468, 
  {"MaxTemperature.Range", "Country", "UK"} -> 78/31, 
  {"MaxTemperature.Range", "Country", "Australia"} -> 3.28871, 
  {"Pressure.Range", "Country", "USA"} -> 2.08222, 
  {"Pressure.Range", "Country", "Australia"} -> 3.33871, 
  {"Temperature.Range", "Country", "USA"} -> 2.14411, 
  {"Temperature.Range", "Country", "UK"} -> 1.25806, 
  {"Temperature.Range", "Country", "Australia"} -> 2.73032, 
  {"WindSpeed.Range", "Country", "USA"} -> 4.13532, 
  {"WindSpeed.Range", "Country", "UK"} -> 3.62097, 
  {"WindSpeed.Range", "Country", "Australia"} -> 3.17226|> *)

Making contingency matrices

One of the main goals of ERTMon is to produce contingency matrices corresponding to the event records data.

The contingency matrices are created and stored as SSparseMatrix objects, [AAp7].

p =
  p⟹ERTMonMakeContingencyMatrices;

We can obtain an association of the contingency matrices for each variable-and-aggregation-function pair, or obtain the overall contingency matrix.

p⟹ERTMonTakeContingencyMatrices
Dimensions /@ %
ERTMon-contingency-matrices
smat = p⟹ERTMonTakeContingencyMatrix;
MatrixPlot[smat, ImageSize -> 700]
ERTMon-contingency-matrix
RowNames[smat]

(* {"EGLC", "KGEU", "KMDW", "KMFL", "KNIP", "WMO95866"} *)

Larger example pipeline

The pipeline shown in this section utilizes all main workflow functions of ERTMon. The used weather data and computation specification are described above.
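The screenshots below show that pipeline; in essence it is a single chain of the functions demonstrated in the previous section, roughly as in the following sketch (the argument values repeat the ones used above):

(* Sketch: one chained ERTMon pipeline using the functions shown above *)
ertObj =
  ERTMonUnit[]⟹
   ERTMonSetEventRecords[eventRecords]⟹
   ERTMonSetEntityAttributes[entityAttributes]⟹
   ERTMonSetComputationSpecification[wCompSpec]⟹
   ERTMonGroupEntityVariableRecords⟹
   ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
   ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
   ERTMonAggregateTimeSeries⟹
   ERTMonNormalize⟹
   ERTMonMakeContingencyMatrices;

smats = ertObj⟹ERTMonTakeContingencyMatrices;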

ERTMon-large-pipeline-example
ERTMon-large-pipeline-example-output

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic Event Records Transformations Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicEventRecordsTransformations.m .

[AAp4] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp5] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m .

[AAp6] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/OutlierIdentifiers.m.

[AAp7] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/SSparseMatrix.m .

[AAp8] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp9] Anton Antonov, Weather event records data Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Misc/WeatherEventRecords.m .

Documents

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2017/06/23/monad-code-generation-and-extension .

[AA1a] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, A monad for classification workflows, (2018), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2018/05/15/a-monad-for-classification-workflows .

QRMon for some credit risk article

Introduction

In this notebook/document we apply the monad QRMon, [3], over data from the article [1]. In order to get the data we use the extraction procedure described in [2].

(I saw the article [1] while browsing LinkedIn today. I met one of the authors during the event "Data Science Salon Miami Feb 2018".)

Extract data

I extracted the data from the image using "Recovering data points from an image".

img = Import["https://www.spglobal.com/_assets/images/marketintelligence/blog-images/demonstration-of-model-fit-comparison-visualization.png"]

extractedData
(* {{-1., 0.284894}, {-0.987395, 0.340483}, {-0.966387, 
  0.215408}, {-0.941176, 0.416918}, {-0.894958, 0.222356}, {-0.890756,
   0.215408}, {-0.878151, 0.0903323}, {-0.848739, 
  0.132024}, {-0.844538, 0.10423}, {-0.831933, 0.333535}, {-0.819328, 
  0.180665}, {-0.781513, 0.423867}, {-0.756303, 0.40997}, {-0.752101, 
  0.528097}, {-0.747899, 0.416918}, {-0.731092, 0.375227}, {-0.714286,
   0.194562}, {-0.710084, 0.340483}, {-0.651261, 
  0.555891}, {-0.647059, 0.333535}, {-0.605042, 0.496828}, {-0.57563, 
  0.}, {-0.512605, 0.354381}, {-0.491597, 0.368278}, {-0.487395, 
  0.472508}, {-0.478992, 0.479456}, {-0.453782, 0.437764}, {-0.357143,
   0.15287}, {-0.344538, 0.340483}, {-0.331933, 0.333535}, {-0.315126,
   0.500302}, {-0.285714, 0.396073}, {-0.247899, 
  0.618429}, {-0.201681, 0.541994}, {-0.159664, 0.680967}, {-0.10084, 
  1.06314}, {-0.0966387, 0.993656}, {0., 1.36193}, {0.0210084, 
  1.44532}, {0.0420168, 1.5148}, {0.0504202, 1.5148}, {0.0882353, 
  1.41405}, {0.130252, 1.70937}, {0.172269, 2.029}, {0.176471, 
  1.7858}, {0.222689, 2.20272}, {0.226891, 2.23746}, {0.231092, 
  2.23746}, {0.239496, 1.96647}, {0.268908, 1.94562}, {0.273109, 
  1.91088}, {0.277311, 1.91088}, {0.281513, 1.94562}, {0.294118, 
  2.2861}, {0.319328, 2.26526}, {0.327731, 2.3}, {0.432773, 
  1.68157}, {0.462185, 1.86918}, {0.5, 2.00121}} *)

ListPlot[extractedData, PlotRange -> All, PlotTheme -> "Detailed"]

Apply QRMon

Load packages. (For more details see [4].)

Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/\
MonadicQuantileRegression.m"]
Import["https://raw.githubusercontent.com/antononcube/\
MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

Apply the QRMon workflow within the TraceMonad:

TraceMonadUnit[QRMonUnit[extractedData]]⟹"lift data to the monad"⟹
  QRMonEchoDataSummary⟹"echo data summary"⟹
  QRMonQuantileRegression[12, 0.5]⟹"do Quantile Regression with\nB-spline basis with 12 knots"⟹
  QRMonPlot⟹"plot the data and regression curve"⟹
  QRMonEcho[Style["Tabulate QRMon steps and explanations:", Purple, Bold]]⟹"echo an explanation message"⟹
  TraceMonadEchoGrid;

References

[1] Moody Hadi and Danny Haydon, "A Perspective On Machine Learning In Credit Risk", (2018), S&P Global Market Intelligence.

[2] Andy Ross, answer of "Recovering data points from an image", (2012).

[3] Anton Antonov, "A monad for Quantile Regression workflows", (2018), MathematicaForPrediction at WordPress.

[4] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub*,https://github.com/antononcube/MathematicaForPrediction.