NY Times COVID-19 data visualization

Yesterday, in one of the forums I frequent, it was announced that the New York Times has published COVID-19 data on GitHub. I decided to make a Mathematica notebook that gives data links and related code for data ingestion. (And rudimentary data analysis.)
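The ingestion itself is a one-liner per file. Here is a sketch for the states-level CSV file of that repository (the repository also provides a counties-level file; `ImportCSVToDataset` is the same WFR function used for data ingestion elsewhere in these posts):

``````dsNYDataStates =
  ResourceFunction["ImportCSVToDataset"][
   "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv"];
Dimensions[dsNYDataStates]``````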

Here is the Markdown version of the notebook: “NY Times COVID-19 data visualization”.

Here is a screenshot of the WL notebook that also links to it:

Screenshot of an interactive interface:

WirVsVirus 2020 hackathon participation

Introduction

Last weekend – 2020-03-20 ÷ 2020-03-22 – I participated in the (Germany-centric) hackathon WirVsVirus. (A friend of mine who lives in Germany asked me to team up and sign up.)

Our idea proposal was accepted, listed in the dedicated overview table (see item 806). The title of our hackathon project is:

“Geo-spatial-temporal Economic Model for COVID-19 Propagation and Management in Germany”

Nearly a dozen people enlisted to help. (We communicated through Slack.)

``````WebImage["https://devpost.com/software/geo-raumlich-zeitliches-\
wirtschaftsmodell-fur-covid-19"]``````

Multiple people helped with the discussion of ideas, with directions where to find data, with actual data gathering, and with related documented analysis. Of course, just discussing the proposed solutions was already a great help!

What was accomplished

Work plans

The following mind-map reflects pretty well what was planned and done:

There is also a related org-mode file with the work plan.

Data

I obtained Germany city data with Mathematica’s built-in functions and used it to heuristically derive a traveling patterns graph, [AA1].

Here is the data:

``````dsCityRecords =
ResourceFunction["ImportCSVToDataset"][
"https://raw.githubusercontent.com/antononcube/SystemModeling/\
master/Data/dfGermanyCityRecords.csv"];
Dimensions[dsCityRecords]

(*{12538, 6}*)``````
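To give an idea of the heuristic derivation, here is a minimal sketch that connects each sufficiently large city to a few of its nearest neighbors; the population threshold and neighbor count are illustrative assumptions, not the exact procedure of [AA1]:

``````(* illustrative heuristic: keep larger cities, connect each to its 3 nearest neighbors *)
coords =
  Values /@ Normal[dsCityRecords[Select[#Population > 20000 &], {"Lat", "Lon"}]];
grTravel = NearestNeighborGraph[coords, 3];``````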

Here is a geo-histogram of that data:

``````(* reconstructed: associate city geo-coordinates with populations *)
aCoordsToPopulations =
  AssociationThread[
   GeoPosition /@ (Values /@ Normal[dsCityRecords[All, {"Lat", "Lon"}]]),
   Normal[dsCityRecords[All, "Population"]]];
grHist =
  GeoHistogram[aCoordsToPopulations,
   ColorFunction -> (Opacity[#, Blue] &), PlotLegends -> Automatic];
If[TrueQ[renderGraphPlotsQ], grHist]``````

We considered a fair amount of other data, but because of the time limitations of the hackathon we used only the data above.

Single-site models

During the development phase I used the model SEI2R, but since we wanted to have a “geo-spatial-temporal epidemiological economics model” I productized the implementation of SEI2HR-Econ, [AAp1].

Here are the stocks, rates, and equations of SEI2HR-Econ:

``Magnify[ModelGridTableForm[SEI2HREconModel[t]], 0.85]``

Multi-site SEI2R (SEI2HR-Econ) over a hexagonal grid graph

I managed to follow through with a large part of the work plan for the hackathon and to make a multi-site scaled model that “follows the money”, [AA1]. Here is a diagram that shows the traveling patterns graph and solutions at one of the nodes:

Here is (a snapshot of) an interactive interface for studying and investigating the solution results:

For more details see the notebook [AA1]. Different parameters can be set in the “Parameters” section. Especially of interest are the quarantine related parameters: start, duration, effect on contact rates and traffic patterns.

I also put in that notebook code for exporting simulation results and programmed visualization routines in R, [AA2]. (So that other members of the team could explore the results.)

References

[DP1] 47_wirtschaftliche Auswirkung_Geo-spatial-temp-econ-modell, DevPost.

[WRI1] Wolfram Research, Inc., Germany city data records, (2020), SystemModeling at GitHub.

[AA1] Anton Antonov, “WirVsVirus hackathon multi-site SEI2R over a hexagonal grid graph”, (2020), SystemModeling at GitHub.

[AA2] Anton Antonov, “WirVsVirus-Hackathon in R”, (2020), SystemModeling at GitHub.

[AAp1] Anton Antonov, “Epidemiology models Mathematica package”, (2020), SystemModeling at GitHub.

Conference abstracts similarities

Introduction

In this MathematicaVsR project we discuss and exemplify finding and analyzing similarities between texts using Latent Semantic Analysis (LSA). Both Mathematica and R codes are provided.

The LSA workflows are constructed and executed with the software monads `LSAMon-WL`, [AA1, AAp1], and `LSAMon-R`, [AAp2].

The illustrating examples are based on conference abstracts from rstudio::conf and the Wolfram Technology Conference (WTC), [AAd1, AAd2]. Since the number of rstudio::conf abstracts is small, and since rstudio::conf 2020 is about to start at the time of preparing this project, we focus on words and texts from RStudio’s ecosystem of packages and presentations.

Statistical thesaurus for words from RStudio’s ecosystem

Consider the focus words:

``{"cloud","rstudio","package","tidyverse","dplyr","analyze","python","ggplot2","markdown","sql"}``

Here is a statistical thesaurus for those words:
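The thesaurus was computed with the LSA monad, presumably with a call along the following lines (a sketch that assumes the `lsaObj` constructed in the section “LSA pipelines specifications” and that `LSAMonEchoStatisticalThesaurus` takes a "Words" option):

``````lsaObj⟹
  LSAMonEchoStatisticalThesaurus[
   "Words" -> {"cloud", "rstudio", "package", "tidyverse", "dplyr",
     "analyze", "python", "ggplot2", "markdown", "sql"}];``````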

Remark: Note that the computed thesaurus entries seem fairly “R-flavored.”

Similarity analysis diagrams

As expected, the abstracts from rstudio::conf tend to cluster closely – note the square formed at the top-left in the plot of a similarity matrix based on extracted topics:

Here is a similarity graph based on the matrix above:

Here is a clustering (by “graph communities”) of the sub-graph highlighted in the plot above:

Comparison observations

LSA pipelines specifications

The packages `LSAMon-WL`, [AAp1], and `LSAMon-R`, [AAp2], make the comparison easy – the codes of the specified workflows are nearly identical.

Here is the Mathematica code:

``````lsaObj =
  LSAMonUnit[aAbstracts]⟹ (* lift the abstracts into the monad; the variable name aAbstracts is assumed *)
  LSAMonMakeDocumentTermMatrix[{}, Automatic]⟹
  LSAMonEchoDocumentTermMatrixStatistics⟹
  LSAMonApplyTermWeightFunctions["IDF", "TermFrequency", "Cosine"]⟹
  LSAMonExtractTopics["NumberOfTopics" -> 36, "MinNumberOfDocumentsPerTerm" -> 2, Method -> "ICA", MaxSteps -> 200]⟹
  LSAMonEchoTopicsTable["NumberOfTableColumns" -> 6];``````

Here is the R code:

``````lsaObj <-
  LSAMonUnit(lsDescriptions) %>%
  LSAMonMakeDocumentTermMatrix( stemWordsQ = FALSE, stopWords = stopwords::stopwords() ) %>%
  LSAMonApplyTermWeightFunctions( "IDF", "TermFrequency", "Cosine" ) %>%
  LSAMonExtractTopics( numberOfTopics = 36, minNumberOfDocumentsPerTerm = 5, method = "NNMF", maxSteps = 20, profilingQ = FALSE ) %>%
  LSAMonEchoTopicsTable( numberOfTableColumns = 6, wideFormQ = TRUE )``````

Graphs and graphics

Mathematica’s built-in graph functions make the exploration of the similarities much easier than using R.

Mathematica’s matrix plots provide more control and are more readily informative.

Sparse matrix objects with named rows and columns

R’s built-in sparse matrices with named rows and columns are great. `LSAMon-WL` utilizes a similar, specially implemented sparse matrix object, see [AA1, AAp3].

References

Articles

[AA1] Anton Antonov, A monad for Latent Semantic Analysis workflows, (2019), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, Text similarities through bags of words, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Data

[AAd1] Anton Antonov, RStudio::conf-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

[AAd2] Anton Antonov, Wolfram-Technology-Conference-2016-to-2019-abstracts.csv, (2020), SimplifiedMachineLearningWorkflows-book at GitHub.

Packages

[AAp1] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Latent Semantic Analysis Monad R package, (2019), R-packages at GitHub.

[AAp3] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

3D ornaments (by texturized polygons)

In brief

There are some recent attempts on the Wolfram Community site to model Christmas trees.

Well here I show a way to make Christmas ornaments like these:

The graphics above were made with the recently submitted WFR function `TexturizePolygons`, which utilizes WL’s `Texture` functionalities, `PolyhedronData`, and the WFR function `RandomMandala`.

(More random mandalas can be found in this Community post: “Random mandalas generation”.)

Some details

Both 2D and 3D graphics can be produced with `TexturizePolygons`:

```BlockRandom[
TexturizePolygons[{"SnubCube", #}, "Radius" -> Sqrt[{6, 4, 2}], ColorFunction -> "TemperatureMap", ImageSize -> Large],
RandomSeeding -> 12] &
/@ {"Net", "Faces"}
```

The animations above were generated using calls like this:

```SeedRandom[38];
TexturizePolygons["SnubCube", Automatic, "Radius" -> Sqrt[{8, 4, 2}],
ColorFunction -> "Rainbow", ImageSize -> Large, Background -> Black]
```

and this:

```TexturizePolygons["GreatRhombicosidodecahedron", Automatic,
"Radius" -> Sqrt[{6, 4, 2}], ColorFunction -> "Rainbow",
ImageSize -> Large, Background -> Black,
ViewCenter -> {0.5, 0.5, 0.5}, SphericalRegion -> True]
```

More images can be found in this Imgur post: “Polyhedrons texturized with random mandalas”.

Parametrized event records data transformations

Introduction

In this document we describe transformations of events records data in order to make that data more amenable for the application of Machine Learning (ML) algorithms.

Consider the following problem formulation (given with the next five bullet points).

• From data representing a (most likely very) diverse set of events we want to derive contingency matrices corresponding to each of the variables in that data.

• The events are observations of the values of a certain set of variables for a certain set of entities. Not all entities have events for all variables.

• The observation times do not form a regular time grid.

• Each contingency matrix has rows corresponding to the entities in the data and has columns corresponding to time.

• The software component providing the functionality should allow parametrization and repeated execution. (As in ML classifier training and testing scenarios.)

The phrase “event records data” is used instead of “time series” in order to emphasize that (i) some variables have categorical values, and (ii) the data can be given in some general database form, like a long-form transactions table.

The required transformations of the event records in the problem formulation above are done through the monad `ERTMon`, [AAp3]. (The name “ERTMon” comes from “Event Records Transformations Monad”.)

The monad code generation and utilization is explained in [AA1] and implemented with [AAp1].

It is assumed that the event records data is put in a form that makes it (relatively) easy to extract time series for the set of entity-variable pairs present in that data.

In brief `ERTMon` performs the following sequence of transformations.

1. The event records of each entity-variable pair are shifted to adhere to a specified start or end point,

2. The event records for each entity-variable pair are aggregated and normalized with specified functions over a specified regular grid,

3. Entity vs. time interval contingency matrices are made for each combination of variable and aggregation function.

The transformations are specified with a “computation specification” dataset.

Here is an example of an `ERTMon` pipeline over event records:
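Schematically, the core of such a pipeline chains the three transformations above. Here is a minimal sketch using monad functions described later in the document (the variables `eventRecords`, `entityAttributes`, and `wCompSpec` are constructed in the following sections):

``````ERTMonUnit[]⟹
  ERTMonSetEventRecords[eventRecords]⟹
  ERTMonSetEntityAttributes[entityAttributes]⟹
  ERTMonSetComputationSpecification[wCompSpec]⟹
  ERTMonGroupEntityVariableRecords⟹
  ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
  ERTMonAggregateTimeSeries⟹
  ERTMonMakeContingencyMatrices;``````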

The rest of the document describes in detail:

• the structure, format, and interpretation of the event records data and computations specifications,

• the transformations of time series aligning, aggregation, and normalization,

• the software pattern design – a monad – that allows sequential specifications of desired transformations.

Concrete examples are given using weather data. See [AAp9].

The following commands load the packages [AAp1-AAp9].

``````Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicEventRecordsTransformations.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/WeatherEventRecords.m"]``````

The data we use is weather data from meteorological stations close to certain major cities. We retrieve the data with the function `WeatherEventRecords` from the package [AAp9].

``?WeatherEventRecords``

WeatherEventRecords[ citiesSpec_: {{_String, _String}..}, dateRange:{{_Integer, _Integer, _Integer}, {_Integer, _Integer, _Integer}}, wProps:{_String..} : {"Temperature"}, nStations_Integer : 1 ] gives an association with event records data.

``````citiesSpec = {{"Miami", "USA"}, {"Chicago", "USA"}, {"London",  "UK"}};
dateRange = {{2017, 7, 1}, {2018, 6, 30}};
wProps = {"Temperature", "MaxTemperature", "Pressure", "Humidity", "WindSpeed"};
res1 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];

citiesSpec = {{"Jacksonville", "USA"}, {"Peoria", "USA"}, {"Melbourne", "Australia"}};
dateRange = {{2016, 12, 1}, {2017, 12, 31}};
res2 = WeatherEventRecords[citiesSpec, dateRange, wProps, 1];``````

Here we assign the obtained datasets to variables we use below:

``````eventRecords = Join[res1["eventRecords"], res2["eventRecords"]];
entityAttributes = Join[res1["entityAttributes"], res2["entityAttributes"]];``````

Here are the summaries of the datasets `eventRecords` and `entityAttributes`:

``RecordsSummary[eventRecords]``
``RecordsSummary[entityAttributes]``

Design considerations

Workflow

The steps of the main event records transformations workflow addressed in this document follow.

1. Ingest event records and entity attributes given in the Star schema style.

2. Ingest a computation specification.
   1. Specified are aggregation time intervals, aggregation functions, normalization types and functions.

3. Group event records based on unique entity ID and variable pairs.
   1. Additional filtering can be applied using the entity attributes.

4. For each variable find descriptive statistics properties.
   1. This is to facilitate normalization procedures.
   2. Optionally, for each variable find outlier boundaries.

5. Align each group of records to start or finish at some specified point.
   1. For each variable we want to impose a regular time grid.

6. From each group of records produce a time series.

7. For each time series do prescribed aggregation and normalization.
   1. The variable that corresponds to each group of records has at least one (possibly several) computation specifications.

8. Make a contingency matrix for each time series obtained in the previous step.
   1. The contingency matrices have entity ID’s as rows and columns corresponding to the time intervals.

The following flow-chart corresponds to the list of steps above.

A corresponding monadic pipeline is given in the section “Larger example pipeline”.

Feature engineering perspective

The workflow above describes a way to do feature engineering over a collection of event records data. For a given entity ID and a variable we derive several different time series.

A couple of examples follow.

• One possible derived feature (time series): for each entity-variable pair we make a time series of the hourly mean value in each of the eight most recent hours for that entity. The mean values are normalized by the average values of the records corresponding to that entity-variable pair.

• Another possible derived feature (time series): for each entity-variable pair we make a time series with the number of outliers in each half-hour interval, considering the most recent 20 half-hour intervals; see the sketch after this list. The outliers are found by using outlier boundaries derived by analyzing all values of the corresponding variable, across all entities.

From the examples above – and some others – we conclude that for each feature we want to be able to specify:

• maximum history length (say from the most recent observation),

• aggregation interval length,

• aggregation function (to be applied in each interval),

• normalization function (per entity, per cohort of entities, per variable),

• conversion of categorical values into numerical ones.
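For example, the outlier-count feature described earlier corresponds to a computation specification row along these lines. (A sketch: the field names are the ones produced by `EmptyComputationSpecificationRow` in the section “Computation specification”; the concrete values are illustrative.)

``````<|"Variable" -> "Temperature", "Explanation" -> "",
 "MaxHistoryLength" -> 20*1800, "AggregationIntervalLength" -> 1800,
 "AggregationFunction" -> "OutliersCount",
 "NormalizationScope" -> "Variable", "NormalizationFunction" -> "None"|>``````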

Repeated execution

We want to be able to do repeated executions of the specified workflow steps.

Consider the following scenario. After the event records data is converted to an entity-vs-feature contingency matrix, we use that matrix to train and test a classifier. We want to find the combination of features that gives the best classifier results. For that reason we want to be able to easily and systematically change the computation specifications (interval size, aggregation and normalization functions, etc.). With different computation specifications we obtain different entity-vs-feature contingency matrices, which would have different performance with different classifiers.

Using the classifier training and testing scenario we see that there is another repeated execution perspective: after the feature engineering is done over the training data, we want to be able to execute exactly the same steps over the test data. Note that with the training data we find certain global or cohort normalization values and outlier boundaries that have to be used over the test data. (Not derived from the test data.)

The following diagram further describes the repeated execution workflow.

Further discussion of making and using ML classification workflows through the monad software design pattern can be found in [AA2].

Event records data design

The data is structured to follow the style of Star schema. We have event records dataset (table) and entity attributes dataset (table).

The proposed structure of the datasets (tables) satisfies a wide range of data modeling requirements. (Medical and financial modeling included.)

Entity event data

The entity event data has the columns “EntityID”, “LocationID”, “ObservationTime”, “Variable”, “Value”.

``RandomSample[eventRecords, 6]``

Most events can be described through “Entity event data”. The entities can be anything that produces a set of event data: financial transactions, vital sign monitors, wind speed sensors, chemical concentrations sensors.

The locations can be anything that gives the events certain “spatial” attributes: medical units in hospitals, sensors geo-locations, tiers of financial transactions.

Entity attributes data

The entity attributes dataset (table) has attributes (immutable properties) of the entities. (Like, gender and race for people, longitude and latitude for wind speed sensors.)

``entityAttributes[[1 ;; 6]]``

Example

For example, here we take all weather stations in USA:

``````ws = Normal[entityAttributes[Select[#Attribute == "Country" && #Value == "USA" &], "EntityID"]]

(* {"KMFL", "KMDW", "KNIP", "KGEU"} *)``````

Here we take all temperature event records for those weather stations:

``srecs = eventRecords[Select[#Variable == "Temperature" && MemberQ[ws, #EntityID] &]];``

And here we plot the corresponding time series, obtained by grouping the records by station (entity ID’s) and taking the columns “ObservationTime” and “Value”:

``````grecs = Normal @ GroupBy[srecs, #EntityID &][All, All, {"ObservationTime", "Value"}];
DateListPlot[grecs, ImageSize -> Large, PlotTheme -> "Detailed"]``````

Monad elements

This section goes through the steps of the general `ERTMon` workflow. For didactic purposes each sub-section changes the pipeline assigned to the variable `p`. Of course, all functions can be chained into one big pipeline, as shown in the section “Larger example pipeline”.

The monad is initialized with ERTMonUnit.

``````ERTMonUnit[]

(* ERTMon[None, <||>] *)``````

Ingesting event records and entity attributes

The event records dataset (table) and entity attributes dataset (table) are set with corresponding setter functions. Alternatively, they can be read from files in a specified directory.

``````p =
ERTMonUnit[]⟹
ERTMonSetEventRecords[eventRecords]⟹
ERTMonSetEntityAttributes[entityAttributes]⟹
ERTMonEchoDataSummary;
``````

Computation specification

Using the package [AAp3] we can create a computation specification dataset. Below is an example of constructing a fairly complicated computation specification.

The package function `EmptyComputationSpecificationRow` can be used to construct the rows of the specification.

``````EmptyComputationSpecificationRow[]

(* <|"Variable" -> Missing[], "Explanation" -> "",
"MaxHistoryLength" -> 3600, "AggregationIntervalLength" -> 60,
"AggregationFunction" -> "Mean", "NormalizationScope" -> "Entity",
"NormalizationFunction" -> "None"|> *)

compSpecRows =
Join[EmptyComputationSpecificationRow[], <|"Variable" -> #,
"MaxHistoryLength" -> 60*24*3600,
"AggregationIntervalLength" -> 2*24*3600,
"AggregationFunction" -> "Mean",
"NormalizationScope" -> "Entity",
"NormalizationFunction" -> "Mean"|>] & /@
Union[Normal[eventRecords[All, "Variable"]]];
compSpecRows =
Join[
compSpecRows,
Join[EmptyComputationSpecificationRow[], <|"Variable" -> #,
"MaxHistoryLength" -> 60*24*3600,
"AggregationIntervalLength" -> 2*24*3600,
"AggregationFunction" -> "Range",
"NormalizationScope" -> "Country",
"NormalizationFunction" -> "Mean"|>] & /@
Union[Normal[eventRecords[All, "Variable"]]],
Join[EmptyComputationSpecificationRow[], <|"Variable" -> #,
"MaxHistoryLength" -> 60*24*3600,
"AggregationIntervalLength" -> 2*24*3600,
"AggregationFunction" -> "OutliersCount",
"NormalizationScope" -> "Variable"|>] & /@
Union[Normal[eventRecords[All, "Variable"]]]
];``````

The constructed rows are assembled into a dataset (with `Dataset`). The function `ProcessComputationSpecification` is used to convert a user-made specification dataset into a form used by `ERTMon`.

``````wCompSpec =
ProcessComputationSpecification[Dataset[compSpecRows]][SortBy[#Variable &]]
``````

The computation specification is set to the monad with the function `ERTMonSetComputationSpecification`.

Alternatively, a computation specification can be created and filled-in as a CSV file and read into the monad. (Not described here.)

Grouping event records by entity-variable pairs

With the function `ERTMonGroupEntityVariableRecords` we group the event records by the found unique entity-variable pairs. Note that in the pipeline below we set the computation specification first.

``````p =
p⟹
ERTMonSetComputationSpecification[wCompSpec]⟹
ERTMonGroupEntityVariableRecords;``````

Descriptive statistics (per variable)

After the data is ingested into the monad and the event records are grouped per entity-variable pairs we can find certain descriptive statistics for the data. This is done with the general function `ERTMonComputeVariableStatistic` and the specialized function `ERTMonFindVariableOutlierBoundaries`.

``p⟹ERTMonComputeVariableStatistic[RecordsSummary]⟹ERTMonEchoValue;``
``p⟹ERTMonComputeVariableStatistic⟹ERTMonEchoValue;``
``````p⟹ERTMonComputeVariableStatistic[TakeLargest[#, 3] &]⟹ERTMonEchoValue;

(* value: <|Humidity->{1.,1.,0.993}, MaxTemperature->{48,48,48},
Pressure->{1043.1,1042.8,1041.1}, Temperature->{42.28,41.94,41.89},
WindSpeed->{54.82,44.63,44.08}|> *)

``````

Finding the variables outlier boundaries

The finding of outlier counts and fractions can be specified in the computation specification. Because of this there is a specialized function for outlier finding, `ERTMonFindVariableOutlierBoundaries`. That function makes the association of the found variable outlier boundaries (i) the pipeline value and (ii) the value of the context key “variableOutlierBoundaries”. The outlier boundaries are found using the functions of the package [AAp6].

If no argument is specified, `ERTMonFindVariableOutlierBoundaries` uses the Hampel identifier (`HampelIdentifierParameters`).

``````p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonEchoValue;

(* value: <|Humidity->{0.522536,0.869464}, MaxTemperature->{14.2106,31.3494},
Pressure->{1012.36,1022.44}, Temperature->{9.88823,28.3318},
WindSpeed->{5.96141,19.4086}|> *)

Keys[p⟹ERTMonFindVariableOutlierBoundaries⟹ERTMonTakeContext]

(* {"eventRecords", "entityAttributes", "computationSpecification",
"entityVariableRecordGroups", "variableOutlierBoundaries"} *)
``````

In the rest of the document we use the outlier boundaries found with the more conservative identifier `SPLUSQuartileIdentifierParameters`.

``````p =
p⟹
ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
ERTMonEchoValue;

(* value: <|Humidity->{0.176,1.168}, MaxTemperature->{-1.67,45.45},
Pressure->{1003.75,1031.35}, Temperature->{-5.805,43.755},
WindSpeed->{-5.005,30.555}|> *)``````

Conversion of event records to time series

The grouped event records are converted into time series with the function `ERTMonEntityVariableGroupsToTimeSeries`. The time series are aligned to a time point specification given as an argument. The argument can be: a date object, “MinTime”, “MaxTime”, or “None”. (“MaxTime” is the default.)

``````p⟹
ERTMonEntityVariableGroupsToTimeSeries["MinTime"]⟹
ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];
``````

Compare the last output with the output of the following command.

``````p =
p⟹
ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
ERTMonEchoFunctionContext[#timeSeries[[{1, 3, 5}]] &];``````

Time series restriction and aggregation

The main goal of `ERTMon` is to convert a diverse, general collection of event records into a collection of aligned time series over specified regular time grids.

The regular time grids are specified with the columns “MaxHistoryLength” and “AggregationIntervalLength” of the computation specification. The time series of the variables in the computation specification are restricted to the corresponding maximum history lengths and are aggregated using the corresponding aggregation lengths and functions.

``````p =
p⟹
ERTMonAggregateTimeSeries⟹
ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
``````

Application of time series functions

At this point we can apply time series modifying functions. An often used such function is the moving average.

``````p⟹
ERTMonApplyTimeSeriesFunction[MovingAverage[#, 6] &]⟹
ERTMonEchoFunctionValue[DateListPlot /@ #[[{1, 3, 5}]] &];``````

Note that the result is given as a pipeline value; the value of the context key “timeSeries” is not changed.

(In the future, the computation specification and its handling might be extended to handle moving average or other time series function specifications.)

Normalization

With “normalization” we mean that the values of a given time series are divided (normalized) by a descriptive statistic derived from a specified set of values. The specified set of values is given with the parameter “NormalizationScope” in the computation specification.

At the normalization stage each time series is associated with an entity ID and a variable.

Normalization is done at three different scopes: “entity”, “attribute”, and “variable”.

Given a time series $T(i,var)$ corresponding to entity ID $i$ and a variable $var$ we define the normalization values for the different scopes in the following ways.

• Normalization with scope “entity” means that the descriptive statistic is derived from the values of $T(i,var)$ only.

• Normalization with scope “attribute” means that:

  • from the entity attributes dataset we find the attribute value that corresponds to $i$,

  • next we find all entity ID’s that are associated with the same attribute value,

  • next we find the value of the normalization descriptive statistic using the time series that correspond to the variable $var$ and the entity ID’s found in the previous step.

• Normalization with scope “variable” means that the descriptive statistic is derived from the values of all time series corresponding to $var$.

Note that the scope “entity” is the most granular, and the scope “variable” is the coarsest.
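Schematically, for a descriptive statistic $\mu$ (say, Mean) and with $a(i)$ denoting the attribute value of entity $i$, the three scopes correspond to the following normalized series (a sketch of the definitions above):

• scope “entity”: $\hat{T}(i,var) = T(i,var) / \mu(T(i,var))$,

• scope “attribute”: $\hat{T}(i,var) = T(i,var) / \mu(\{T(j,var) : a(j) = a(i)\})$,

• scope “variable”: $\hat{T}(i,var) = T(i,var) / \mu(\{T(j,var) : \text{all entities } j\})$.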

The following command demonstrates the normalization effect – compare the $y$-axes scales of the time series corresponding to the same entity-variable pair.

``````p =
p⟹
ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &]⟹
ERTMonNormalize⟹
ERTMonEchoFunctionContext[DateListPlot /@ #timeSeries[[{1, 3, 5}]] &];
``````

Here are the normalization values that should be used when normalizing “unseen data.”

``````p⟹ERTMonTakeNormalizationValues

(* <|{"Humidity.Range", "Country", "USA"} -> 0.0864597,
{"Humidity.Range", "Country", "UK"} -> 0.066,
{"Humidity.Range", "Country", "Australia"} -> 0.145968,
{"MaxTemperature.Range", "Country", "USA"} -> 2.85468,
{"MaxTemperature.Range", "Country", "UK"} -> 78/31,
{"MaxTemperature.Range", "Country", "Australia"} -> 3.28871,
{"Pressure.Range", "Country", "USA"} -> 2.08222,
{"Pressure.Range", "Country", "Australia"} -> 3.33871,
{"Temperature.Range", "Country", "USA"} -> 2.14411,
{"Temperature.Range", "Country", "UK"} -> 1.25806,
{"Temperature.Range", "Country", "Australia"} -> 2.73032,
{"WindSpeed.Range", "Country", "USA"} -> 4.13532,
{"WindSpeed.Range", "Country", "UK"} -> 3.62097,
{"WindSpeed.Range", "Country", "Australia"} -> 3.17226|> *)``````

Making contingency matrices

One of the main goals of `ERTMon` is to produce contingency matrices corresponding to the event records data.

The contingency matrices are created and stored as `SSparseMatrix` objects, [AAp7].

``````p =
p⟹ERTMonMakeContingencyMatrices;``````

We can obtain an association of the contingency matrices for each variable-and-aggregation-function pair, or obtain the overall contingency matrix.

``````p⟹ERTMonTakeContingencyMatrices
Dimensions /@ %``````
``````smat = p⟹ERTMonTakeContingencyMatrix;
MatrixPlot[smat, ImageSize -> 700]``````
``````RowNames[smat]

(* {"EGLC", "KGEU", "KMDW", "KMFL", "KNIP", "WMO95866"} *)``````

Larger example pipeline

The pipeline shown in this section utilizes all main workflow functions of `ERTMon`. The used weather data and computation specification are described above.
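Here is a sketch of such a pipeline, assembled from the workflow functions demonstrated in the previous sections (the outlier boundaries step is included as used above):

``````pFull =
  ERTMonUnit[]⟹
  ERTMonSetEventRecords[eventRecords]⟹
  ERTMonSetEntityAttributes[entityAttributes]⟹
  ERTMonSetComputationSpecification[wCompSpec]⟹
  ERTMonGroupEntityVariableRecords⟹
  ERTMonFindVariableOutlierBoundaries[SPLUSQuartileIdentifierParameters]⟹
  ERTMonEntityVariableGroupsToTimeSeries["MaxTime"]⟹
  ERTMonAggregateTimeSeries⟹
  ERTMonNormalize⟹
  ERTMonMakeContingencyMatrices;
smat = pFull⟹ERTMonTakeContingencyMatrix;
MatrixPlot[smat, ImageSize -> 700]``````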

References

Packages

[AAp7] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/SSparseMatrix.m .

Documents

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub.

QRMon for some credit risk article

Introduction

In this notebook/document we apply the monad `QRMon`, [3], over data of the article [1]. In order to get the data we use the extraction procedure described in [2].

(I saw the article [1] while browsing LinkedIn today. I met one of the authors during the event "Data Science Salon Miami Feb 2018".)

Extract data

I extracted the data from the image using "Recovering data points from an image".

``img = Import["https://www.spglobal.com/_assets/images/marketintelligence/blog-images/demonstration-of-model-fit-comparison-visualization.png"]``
``````extractedData
(* {{-1., 0.284894}, {-0.987395, 0.340483}, {-0.966387,
0.215408}, {-0.941176, 0.416918}, {-0.894958, 0.222356}, {-0.890756,
0.215408}, {-0.878151, 0.0903323}, {-0.848739,
0.132024}, {-0.844538, 0.10423}, {-0.831933, 0.333535}, {-0.819328,
0.180665}, {-0.781513, 0.423867}, {-0.756303, 0.40997}, {-0.752101,
0.528097}, {-0.747899, 0.416918}, {-0.731092, 0.375227}, {-0.714286,
0.194562}, {-0.710084, 0.340483}, {-0.651261,
0.555891}, {-0.647059, 0.333535}, {-0.605042, 0.496828}, {-0.57563,
0.}, {-0.512605, 0.354381}, {-0.491597, 0.368278}, {-0.487395,
0.472508}, {-0.478992, 0.479456}, {-0.453782, 0.437764}, {-0.357143,
0.15287}, {-0.344538, 0.340483}, {-0.331933, 0.333535}, {-0.315126,
0.500302}, {-0.285714, 0.396073}, {-0.247899,
0.618429}, {-0.201681, 0.541994}, {-0.159664, 0.680967}, {-0.10084,
1.06314}, {-0.0966387, 0.993656}, {0., 1.36193}, {0.0210084,
1.44532}, {0.0420168, 1.5148}, {0.0504202, 1.5148}, {0.0882353,
1.41405}, {0.130252, 1.70937}, {0.172269, 2.029}, {0.176471,
1.7858}, {0.222689, 2.20272}, {0.226891, 2.23746}, {0.231092,
2.23746}, {0.239496, 1.96647}, {0.268908, 1.94562}, {0.273109,
1.91088}, {0.277311, 1.91088}, {0.281513, 1.94562}, {0.294118,
2.2861}, {0.319328, 2.26526}, {0.327731, 2.3}, {0.432773,
1.68157}, {0.462185, 1.86918}, {0.5, 2.00121}} *)

ListPlot[extractedData, PlotRange -> All, PlotTheme -> "Detailed"]``````

Apply `QRMon`

Load packages. (For more details see [4].)

``````Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"] (* package paths reconstructed; assumed *)
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]``````

Apply the QRMon workflow within the TraceMonad:

``````TraceMonadUnit[QRMonUnit[extractedData]]⟹"lift data to the monad"⟹
  QRMonEchoDataSummary⟹"echo data summary"⟹
  QRMonQuantileRegression[12, 0.5]⟹"do Quantile Regression with\nB-spline basis with 12 knots"⟹
  QRMonPlot⟹"plot the data and regression curve"⟹
  QRMonEcho[Style["Tabulate QRMon steps and explanations:", Purple, Bold]]⟹"echo an explanation message"⟹
  TraceMonadEchoGrid; (* closing tabulation step; assumed from the TraceMonad package *)``````

References

[1] Moody Hadi and Danny Haydon, "A Perspective On Machine Learning In Credit Risk", (2018), S&P Global Market Intelligence.

[2] Andy Ross, answer to "Recovering data points from an image", (2012).

[3] Anton Antonov, "A monad for Quantile Regression workflows", (2018), MathematicaForPrediction at WordPress.

A monad for Quantile Regression workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for Quantile Regression workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

What is Quantile Regression? Assume we have a set of two-dimensional points, each point being a pair of an independent variable value and a dependent variable value. We want to find a curve that is a function of the independent variable that splits the points in such a way that, say, 30% of the points are above that curve. This is done with Quantile Regression, see [Wk2, CN1, AA2, AA3]. Quantile Regression is a method to estimate the variable relations for all parts of the distribution. (Not just, say, the mean of the relationships found with Least Squares Regression.)
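For reference, a regression quantile for a quantile $q \in (0,1)$ can be defined as a minimizer of the "tilted" absolute-value loss (a standard formulation, not specific to the packages used here):

$$\min_{f \in \mathcal{F}} \sum_{i=1}^{n} \rho_q(y_i - f(x_i)), \qquad \rho_q(r) = r \, (q - [r < 0]),$$

where $[r < 0]$ is 1 if $r < 0$ and 0 otherwise; $q = 0.5$ gives the median (least absolute deviations) fit.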

The goal of the monad design is to make the specification of Quantile Regression workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario. Since Quantile Regression is often compared with Least Squares Regression and some type of filtering (like Moving Average) those functionalities should be included in the monad design scenarios.

The monad is named QRMon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1] and the Quantile Regression package "QuantileRegression.m", [AAp4, AA2].

The data for this document is read from WL’s repository or created ad-hoc.

The monadic programming design is used as a Software Design Pattern. The QRMon monad can also be seen as a Domain Specific Language (DSL) for the specification and programming of Quantile Regression workflows.

Here is an example of using the QRMon monad over heteroscedastic data:

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As it was mentioned above the monad QRMon can be seen as a DSL. Because of this the monad pipelines made with QRMon are sometimes called "specifications".

Remark: With "regression quantile" we mean "a curve or function that is computed with Quantile Regression".

Contents description

The document has the following structure.

• The sections "Package load" and "Data load" obtain the needed code and data.
• The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

• The sections "QRMon overview" and "Monad elements" provide technical description of the QRMon monad needed to utilize it.

• (Using a fair amount of examples.)
• The section "Unit tests" describes the tests used in the development of the QRMon monad.
• (The random pipelines unit tests are especially interesting.)
• The section "Future plans" outlines future directions of development.
• The section "Implementation notes" just says that QRMon’s development process and this document follow the ones of the classifications workflows monad `ClCon`, [AA6].

Remark: One can read only the sections "Introduction", "Design consideration", "Monad design", and "QRMon overview". That set of sections provide a fairly good, programming language agnostic exposition of the substance and novel ideas of this document.


Package load

The following commands load the packages [AAp1–AAp6]:

``````Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicQuantileRegression.m"] (* package paths reconstructed; assumed per [AAp1-AAp6] *)
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]``````

Data load

In this section we load data that is used in the rest of the document. The time series data is obtained through WL’s repository.

The data summarization and plots are done through QRMon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp6].

Distribution data

The following data is generated to have [heteroscedasticity](https://en.wikipedia.org/wiki/Heteroscedasticity).

``````distData =
Table[{x,
Exp[-x^2] +
RandomVariate[
NormalDistribution[0, .15 Sqrt[Abs[1.5 - x]/1.5]]]}, {x, -3,
3, .01}];
Length[distData]

(* 601 *)

QRMonUnit[distData]⟹QRMonEchoDataSummary⟹QRMonPlot;``````

Temperature time series

``````tsData = WeatherData[{"Orlando", "USA"}, "Temperature", {{2015, 1, 1}, {2018, 1, 1}, "Day"}]

QRMonUnit[tsData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;``````

Financial time series

The following data is typical for financial time series. (Note the differences with the temperature time series.)

``````finData = TimeSeries[FinancialData["NYSE:GE", {{2014, 1, 1}, {2018, 1, 1}, "Day"}]];

QRMonUnit[finData]⟹QRMonEchoDataSummary⟹QRMonDateListPlot;``````

Design considerations

The steps of the main regression workflow addressed in this document follow.

1. Retrieving data from a data repository.

2. Optionally, transform the data.
   1. Delete rows with missing fields.
   2. Rescale data along one or both of the axes.
   3. Apply moving average (or median, or map).

3. Verify assumptions of the data.

4. Run a regression algorithm with a certain basis of functions using:
   1. Quantile Regression, or
   2. Least Squares Regression.

5. Visualize the data and regression functions.

6. If the regression functions fit is not satisfactory go to step 4.

7. Utilize the found regression functions to compute:
   1. outliers,
   2. local extrema,
   3. approximation or fitting errors,
   4. conditional density distributions,
   5. time series simulations.

The following flow-chart corresponds to the list of steps above.

In order to address:

• the introduction of new elements in regression workflows,

• workflow elements variability, and

• iterative changes and refining of workflows,

it is beneficial to have a DSL for regression workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a regression workflow monad and hints at the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […] Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

The monad we consider is designed to speed-up the programming of quantile regression workflows outlined in the previous section. The monad is named QRMon for "Quantile Regression Monad".

We want to be able to construct monad pipelines of the general form:
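Schematically, with $f_1, \ldots, f_k$ denoting QRMon operations, that general form can be sketched as:

$$\text{QRMonUnit}[data] \Longrightarrow f_1 \Longrightarrow f_2 \Longrightarrow \cdots \Longrightarrow f_k. \quad (1)$$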

QRMon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:
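Schematically, each operation now consumes and produces both a pipeline value $x$ and a context $c$; a sketch:

$$\text{QRMonUnit}[data] \Longrightarrow f_1[x_0, c_0] \Longrightarrow f_2[x_1, c_1] \Longrightarrow \cdots \Longrightarrow f_k[x_{k-1}, c_{k-1}]. \quad (2)$$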

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of QRMon we store different objects in the contexts for at least one of the following two reasons.

1. The object will be needed later on in the pipeline, or

2. The object is (relatively) hard to compute.

Such objects are transformed data, regression functions, and outliers.

Let us list the desired properties of the monad.

• Rapid specification of non-trivial quantile regression workflows.

• The monad works with time series, numerical matrices, and numerical vectors.

• The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

• The monad can do quantile regression with B-Splines bases, quantile regression fit and least squares fit with specified bases of functions.

• The monad allows cursory examination and summarization of the data.

• It is easy to obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

• It is easy to plot different combinations of data, regression functions, outliers, approximation errors, etc.

The QRMon components and their interactions are fairly simple.

The main QRMon operations implicitly put in the context or utilize from the context the following objects:

• (time series) data,

• regression functions,

• outliers and outlier regression functions.

Note that the set of types of QRMon pipeline values is fairly heterogeneous and certain awareness of "the current pipeline value" is assumed when composing QRMon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

QRMon overview

When using a monad we lift certain data into the "monad space", using the monad’s operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the QRMon monad is done with the function QRMonUnit. Results from the monad can be obtained with the functions QRMonTakeValue, QRMonTakeContext, or with the other QRMon functions with the prefix "QRMonTake" (see below.)

Here is a corresponding diagram of a generic computation with the QRMon monad:

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete QRMon pipeline that corresponds to the diagram above. In the following table each pipeline operation is combined together with a short explanation and the context keys after its execution.

Here is the output of the pipeline:

The QRMon functions are separated into four groups:

• operations,

• setters and droppers,

• takers,

• State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the QRMon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the QRMon monad functions with the pipeline value and context.

The following table shows the functions that are function synonyms or short-cuts.

Here are the QRMon State Monad functions (generated using the prefix "QRMon", [AAp1, AA1]):

Monad elements

In this section we show that QRMon has all of the properties listed in the previous section.

The monad head is QRMon. Anything wrapped in QRMon can serve as the monad’s pipeline value. It is better though to use the constructor QRMonUnit. (Which adheres to the definition in [Wk1].)

``QRMon[{{1, 223}, {2, 323}}, <||>]⟹QRMonEchoDataSummary;``

The function lifting the data into the monad QRMon is QRMonUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

``QRMonUnit[distData]⟹QRMonEchoDataSummary;``
``QRMonUnit[]⟹QRMonSetData[distData]⟹QRMonEchoDataSummary;``

(See the sub-section "Setters, droppers, and takers" for more details of setting and taking values in QRMon contexts.)

Currently the monad can deal with data in the following forms:

• time series,

• numerical vectors,

• numerical matrices of rank two.

When the data lifted to the monad is a numerical vector vec, it is assumed that vec has to become the second column of a "time series" matrix; the first column is derived with Range[Length[vec]].

Generally, WL makes it easy to extract columns from datasets in order to obtain numerical matrices, so datasets are not currently supported in QRMon.
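For example, a plain numerical vector can be lifted and used directly; a minimal sketch with ad hoc data:

``````QRMonUnit[RandomReal[{0, 1}, 100]]⟹
  QRMonQuantileRegression[6]⟹
  QRMonPlot;``````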

Quantile regression with B-splines

This computes quantile regression with B-spline basis over 12 regularly spaced knots. (Using Linear Programming algorithms; see [AA2] for details.)

``````QRMonUnit[distData]⟹
QRMonQuantileRegression[12]⟹
QRMonPlot;``````

The monad function QRMonQuantileRegression has the same options as QuantileRegression. (The default value for the option Method is different, since using "CLP" is generally faster.)

``````Options[QRMonQuantileRegression]

(* {InterpolationOrder -> 3, Method -> {LinearProgramming, Method -> "CLP"}} *)``````

Let us compute regression using a list of particular knots, specified quantiles, and the method "InteriorPoint" (instead of the Linear Programming library CLP):

``````p =
  QRMonUnit[distData]⟹
  QRMonQuantileRegression[{-3, -2, -1, 0, 1, 1.5, 2.5, 3}, Range[0.1, 0.9, 0.2], Method -> {LinearProgramming, Method -> "InteriorPoint"}]⟹
  QRMonPlot;``````

Remark: As it was mentioned above the function QRMonRegression is a synonym of QRMonQuantileRegression.

The fit functions can be extracted from the monad with QRMonTakeRegressionFunctions, which gives an association of quantiles and pure functions.

``ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)``

Quantile regression fit and Least squares fit

Instead of using a B-spline basis of functions we can compute a fit with our own basis of functions.

Here is a basis of functions:

``````bFuncs = Table[PDF[NormalDistribution[m, 1], x], {m, Min[distData[[All, 1]]], Max[distData[[All, 1]]], 1}];
Plot[bFuncs, {x, Min[distData[[All, 1]]], Max[distData[[All, 1]]]},
PlotRange -> All, PlotTheme -> "Scientific"]``````

Here we do a Quantile Regression fit, a Least Squares fit, and plot the results:

``````p =
QRMonUnit[distData]⟹
QRMonQuantileRegressionFit[bFuncs]⟹
QRMonLeastSquaresFit[bFuncs]⟹
QRMonPlot;
``````

Remark: The functions "QRMon*Fit" should generally have a second argument for the symbol of the basis functions independent variable. Often that symbol can be omitted and implied. (Which can be seen in the pipeline above.)

Remark: As it was mentioned above the function QRMonRegressionFit is a synonym of QRMonQuantileRegressionFit and QRMonFit is a synonym of QRMonLeastSquaresFit.

As it was pointed out in the previous sub-section, the fit functions can be extracted from the monad with QRMonTakeRegressionFunctions. Here the keys of the returned/taken association consist of quantiles and "mean" since we applied both Quantile Regression and Least Squares Regression.

``ListPlot[# /@ distData[[All, 1]]] & /@ (p⟹QRMonTakeRegressionFunctions)``

Default basis to fit (using Chebyshev polynomials)

One of the main advantages of using the function QuantileRegression of the package [AAp4] is that the functions used to do the regression are specified with a few numeric parameters. (Most often only the number of knots is sufficient.) This is achieved by using a basis of B-spline functions of a certain interpolation order.

To get similar behavior for Quantile Regression fitting we need to select a well-known basis with certain desirable properties. Such a basis is given by the Chebyshev polynomials of the first kind, [Wk3]. Chebyshev polynomial bases can be easily generated in Mathematica with the functions ChebyshevT or ChebyshevU.

Here is an application of fitting with a basis of 12 Chebyshev polynomials of first kind:

``````QRMonUnit[distData]⟹
QRMonQuantileRegressionFit[12]⟹
QRMonLeastSquaresFit[12]⟹
QRMonPlot;``````

The code above is equivalent to the following code:

``````bfuncs = Table[ChebyshevT[i, Rescale[x, MinMax[distData[[All, 1]]], {-0.95, 0.95}]], {i, 0, 12}];

p =
QRMonUnit[distData]⟹
QRMonQuantileRegressionFit[bfuncs]⟹
QRMonLeastSquaresFit[bfuncs]⟹
QRMonPlot;``````

The shrinking of the ChebyshevT domain seen in the definition of bfuncs is done in order to prevent approximation error effects at the ends of the data domain. The following code uses the ChebyshevT domain {-1, 1} instead of the domain {-0.95, 0.95} used above.

``````QRMonUnit[distData]⟹
QRMonQuantileRegressionFit[{4, {-1, 1}}]⟹
QRMonPlot;``````

Regression functions evaluation

The computed quantile and least squares regression functions can be evaluated with the monad function QRMonEvaluate.

Evaluation for a given value of the independent variable:

``````p⟹QRMonEvaluate[0.12]⟹QRMonTakeValue

(* <|0.25 -> 0.930402, 0.5 -> 1.01411, 0.75 -> 1.08075, "mean" -> 0.996963|> *)``````

Evaluation for a vector of values:

``````p⟹QRMonEvaluate[Range[-1, 1, 0.5]]⟹QRMonTakeValue

(* <|0.25 -> {0.258241, 0.677461, 0.943299, 0.703812, 0.293741},
0.5 -> {0.350025, 0.768617, 1.02311, 0.807879, 0.374545},
0.75 -> {0.499338, 0.912183, 1.10325, 0.856729, 0.431227},
"mean" -> {0.355353, 0.776006, 1.01118, 0.783304, 0.363172}|> *)``````

Evaluation for complicated lists of numbers:

``````p⟹QRMonEvaluate[{0, 1, {1.5, 1.4}}]⟹QRMonTakeValue

(* <|0.25 -> {0.943299, 0.293741, {0.0762883, 0.10759}},
0.5 -> {1.02311, 0.374545, {0.103386, 0.139142}},
0.75 -> {1.10325, 0.431227, {0.133755, 0.177161}},
"mean" -> {1.01118, 0.363172, {0.107989, 0.142021}}|> *)
``````

The obtained values can be used to compute estimates of the distributions of the dependent variable. See the sub-sections "Estimating conditional distributions" and "Dependent variable simulation".

Errors and error plots

Here with "errors" we mean the differences between data’s dependent variable values and the corresponding values calculated with the fitted regression curves.

In the pipeline below we compute a couple of regression quantiles, plot them together with the data, plot the errors, compute the errors, and summarize them.

``````QRMonUnit[finData]⟹
QRMonQuantileRegression[10, {0.5, 0.1}]⟹
QRMonDateListPlot[Joined -> False]⟹
QRMonErrorPlots["DateListPlot" -> True, Joined -> False]⟹
QRMonErrors⟹
QRMonEchoFunctionValue["Errors summary:", RecordsSummary[#[[All, 2]]] & /@ # &];``````

Each of the functions QRMonErrors and QRMonErrorPlots computes the errors. (That computation is considered cheap.)

Finding outliers

Finding outliers can be done with the function QRMonOutliers. The outliers found by QRMonOutliers are simply points that are below or above certain regression quantile curves, for example, the ones corresponding to 0.02 and 0.98.

Here is an example:

``````p =
QRMonUnit[distData]⟹
QRMonQuantileRegression[6, {0.02, 0.98}]⟹
QRMonOutliers⟹
QRMonEchoValue⟹
QRMonOutliersPlot;``````

The function QRMonOutliers puts in the context values for the keys "outliers" and "outlierRegressionFunctions". The former is for the found outliers, the latter is for the functions corresponding to the used regression quantiles.

``````Keys[p⟹QRMonTakeContext]

(* {"data", "regressionFunctions", "outliers", "outlierRegressionFunctions"} *)``````

Here are the corresponding quantiles of the plot above:

``````Keys[p⟹QRMonTakeOutlierRegressionFunctions]

(* {0.02, 0.98} *)``````

The control of the outliers computation is done through the arguments and options of QRMonQuantileRegression (or the rest of the regression calculation functions.)

If only one regression quantile is found in the context and the corresponding quantile is less than 0.5 then QRMonOutliers finds only bottom outliers. If only one regression quantile is found in the context and the corresponding quantile is greater than 0.5 then QRMonOutliers finds only top outliers.

Here is an example for finding only the top outliers:

``````QRMonUnit[finData]⟹
QRMonQuantileRegression[5, 0.8]⟹
QRMonOutliers⟹
QRMonEchoFunctionContext["outlier quantiles:", Keys[#outlierRegressionFunctions] &]⟹
QRMonOutliersPlot["DateListPlot" -> True];
``````

Plotting outliers

The function QRMonOutliersPlot makes an outliers plot. If the outliers are not in the context then QRMonOutliersPlot calls QRMonOutliers first.

Here are the options of QRMonOutliersPlot:

``````Options[QRMonOutliersPlot]

(* {"Echo" -> True, "DateListPlot" -> False, ListPlot -> {Joined -> False}, Plot -> {}} *)``````

The default behavior is to echo the plot. That can be suppressed with the option "Echo".

QRMonOutliersPlot combines two plots with Show:

• one with ListPlot (or DateListPlot) for the data and the outliers,

• the other with Plot for the regression quantiles used to find the outliers.

That is why separate lists of options can be given to manipulate those two plots. The option "DateListPlot" can be used to make plots with date or time axes.

``````QRMonUnit[tsData]⟹
QRMonQuantileRegression[12, {0.01, 0.99}]⟹
QRMonOutliersPlot[
"Echo" -> False,
"DateListPlot" -> True,
ListPlot -> {PlotStyle -> {Green, {PointSize[0.02],
Red}, {PointSize[0.02], Blue}}, Joined -> False,
PlotTheme -> "Grid"},
Plot -> {PlotStyle -> Orange}]⟹
QRMonTakeValue
``````

Estimating conditional distributions

Consider the following problem:

How to estimate the conditional density of the dependent variable given a value of the conditioning independent variable?

(In other words, find the distribution of the y-values for a given, fixed x-value.)

The solution of this problem using Quantile Regression is discussed in detail in [PG1] and [AA4].

Finding a solution for this problem can be seen as a primary motivation to develop Quantile Regression algorithms.

The following pipeline (i) computes and plots a set of five regression quantiles and (ii) then, using the found regression quantiles, computes and plots the conditional distributions for two focus points (-2 and 1).

``````QRMonUnit[distData]⟹
QRMonQuantileRegression[6,
Range[0.1, 0.9, 0.2]]⟹
QRMonPlot[GridLines -> {{-2, 1}, None}]⟹
QRMonConditionalCDF[{-2, 1}]⟹
QRMonConditionalCDFPlot;``````

Moving average, moving median, and moving map

Fairly often it is a good idea for a given time series to apply filter functions like Moving Average or Moving Median. We might want to:

• visualize the obtained transformed data,

• do regression over the transformed data,

• compare with regression curves over the original data.

For these reasons QRMon has the functions QRMonMovingAverage, QRMonMovingMedian, and QRMonMovingMap that correspond to the built-in functions MovingAverage, MovingMedian, and MovingMap.

Here is an example:

``````QRMonUnit[tsData]⟹
QRMonDateListPlot[ImageSize -> Small]⟹
QRMonMovingAverage[20]⟹
QRMonEchoFunctionValue["Moving avg: ", DateListPlot[#, ImageSize -> Small] &]⟹
QRMonMovingMap[Mean, Quantity[20, "Days"]]⟹
QRMonEchoFunctionValue["Moving map: ", DateListPlot[#, ImageSize -> Small] &];``````

Dependent variable simulation

Consider the problem of making a time series that is a simulation of a process given with a known time series.

More formally,

• we are given a time-axis grid (regular or irregular),

• we consider each grid node to correspond to a random variable,

• we want to generate time series based on the empirical CDF’s of the random variables that correspond to the grid nodes.

The formulation of the problem hints to an (almost) straightforward implementation using Quantile Regression.

``````p = QRMonUnit[tsData]⟹QRMonQuantileRegression[30, Join[{0.01}, Range[0.1, 0.9, 0.1], {0.99}]];

tsNew =
p⟹
QRMonSimulate[1000]⟹
QRMonTakeValue;

opts = {ImageSize -> Medium, PlotTheme -> "Detailed"};
GraphicsGrid[{{DateListPlot[tsData, PlotLabel -> "Actual", opts],
DateListPlot[tsNew, PlotLabel -> "Simulated", opts]}}]``````

Finding local extrema in noisy data

Using regression fitting — and Quantile Regression in particular — we can easily construct semi-symbolic algorithms for finding local extrema in noisy time series data; see [AA5]. The QRMon function with such an algorithm is QRMonLocalExtrema.

In brief, the algorithm steps are as follows. (For more details see [AA5].)

1. Fit a polynomial through the data.

2. Find the local extrema of the fitted polynomial. (We will call them fit estimated extrema.)

3. Around each of the fit estimated extrema find the most extreme point in the data by a nearest neighbors search (by using Nearest).

The function QRMonLocalExtrema uses the regression quantiles previously found in the monad pipeline (and stored in the context.) The bottom regression quantile is used for finding local minima, the top regression quantile is used for finding the local maxima.

An example of finding local extrema follows.

``````QRMonUnit[TimeSeriesWindow[tsData, {{2015, 1, 1}, {2018, 12, 31}}]]⟹
QRMonQuantileRegression[10, {0.05, 0.95}]⟹
QRMonDateListPlot[Joined -> False, PlotTheme -> "Scientific"]⟹
QRMonLocalExtrema["NumberOfProximityPoints" -> 100]⟹
QRMonEchoValue⟹
QRMonAddToContext⟹ (* store "localMinima"/"localMaxima" in the context, as described below *)
QRMonEchoFunctionContext[
DateListPlot[{#localMinima, #localMaxima, #data},
PlotStyle -> {PointSize[0.015], PointSize[0.015], Gray},
Joined -> False,
PlotLegends -> {"localMinima", "localMaxima", "data"},
PlotTheme -> "Scientific"] &];``````

Note that in the pipeline above in order to plot the data and local extrema together some additional steps are needed. The result of QRMonLocalExtrema becomes the pipeline value; that pipeline value is displayed with QRMonEchoValue, and stored in the context with QRMonAddToContext. If the pipeline value is an association — which is the case here — the monad function QRMonAddToContext joins that association with the context association. In this case this means that we will have key-value elements in the context for "localMinima" and "localMaxima". The date list plot at the end of the pipeline uses values of those context keys (together with the value for "data".)

Setters, droppers, and takers

The values from the monad context can be set, obtained, or dropped with the corresponding "setter", "dropper", and "taker" functions as summarized in a previous section.

For example:

``````p = QRMonUnit[distData]⟹QRMonQuantileRegressionFit[2];

p⟹QRMonTakeRegressionFunctions

(* <|0.25 -> (0.0191185 + 0.00669159 #1 + 3.05509*10^-14 #1^2 &),
0.5 -> (0.191408 + 9.4728*10^-14 #1 + 3.02272*10^-14 #1^2 &),
0.75 -> (0.563422 + 3.8079*10^-11 #1 + 7.63637*10^-14 #1^2 &)|> *)
``````

If other values are put in the context they can be obtained through the (generic) function QRMonTakeContext, [AAp1]:

``````p = QRMonUnit[RandomReal[1, {2, 2}]]⟹QRMonAddToContext["data"];

(p⟹QRMonTakeContext)["data"]

(* {{0.608789, 0.741599}, {0.877074, 0.861554}} *)``````

Another generic function from [AAp1] is QRMonTakeValue (used many times above.)

Here is an example of the "data dropper" QRMonDropData:

``````p⟹QRMonDropData⟹QRMonTakeContext

(* <||> *)``````

(The "droppers" simply use the state monad function QRMonDropFromContext, [AAp1]. For example, QRMonDropData is equivalent to QRMonDropFromContext["data"].)

Unit tests

The development of QRMon was done with two types of unit tests: (i) directly specified tests, [AAp7], and (ii) tests based on randomly generated pipelines, [AAp8].

The unit test package should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicQuantileRegression-Unit-Tests.wlt", [AAp7]:

``````AbsoluteTiming[
 testObject = TestReport["MonadicQuantileRegression-Unit-Tests.wlt"] (* file of [AAp7]; the path is assumed *)
]``````

The natural language derived test ID’s should give a fairly good idea of the functionalities covered in [AAp3].

``````Values[Map[#["TestID"] &, testObject["TestResults"]]]

"QuantileRegression-2", "QuantileRegression-3", \
"QuantileRegression-and-Fit-1", "Fit-and-QuantileRegression-1", \
"QuantileRegressionFit-and-Fit-1", "Fit-and-QuantileRegressionFit-1", \
"Outliers-1", "Outliers-2", "GridSequence-1", "BandsSequence-1", \
"ConditionalCDF-1", "Evaluate-1", "Evaluate-2", "Evaluate-3", \
"Simulate-1", "Simulate-2", "Simulate-3"} *)``````

Random pipelines tests

Since the monad QRMon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the QRMon DSL the sentences are QRMon pipelines. The package "MonadicQuantileRegressionRandomPipelinesUnitTests.m", [AAp8], has functions for generation of QRMon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

``````SeedRandom[234]
pipelines = MakeQRMonRandomPipelines[100];
Length[pipelines]

(* 100 *)``````

Here is a sample of the generated pipelines:

``````(*
Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]},
Clear[DoubleLongRightArrow];
pipelines = pipelines /. {_TemporalData -> "tsData", _?MatrixQ -> "distData"};
GridTableForm[Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], TableHeadings -> {"pipeline"}]
]
AutoCollapse[] *)``````

Here we run the pipelines as unit tests:

``````AbsoluteTiming[
res = TestRunQRMonPipelines[pipelines, "Echo" -> False];
]``````

From the test report results we see that a dozen tests failed with messages and all of the rest passed.

``rpTRObj = TestReport[res]``

(The message failures, of course, have to be examined — some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Workflow operations

A list of possible, additional workflow operations and improvements follows.

• Certain improvements can be done over the specification of the different plot options.

• It will be useful to develop a function for automatic finding of over-fitting parameters.

• The time series simulation should be done by aggregation of similar time intervals.

• For example, for a time series that spans several years, a Quantile Regression simulation is made for each month, and the results are spliced to obtain a one-year simulation.
• If the time series is represented as a sequence of categorical values, then the time series simulation can use Bayesian probabilities derived from sub-sequences.
• QRMon already has functions that facilitate that, QRMonGridSequence and QRMonBandsSequence.

Conversational agent

Using the packages [AAp10, AAp11] we can generate QRMon pipelines with natural commands. The plan is to develop and document those functionalities further.

Here is an example of a pipeline constructed with natural language commands:

``````QRMonUnit[distData]⟹
ToQRMonPipelineFunction["show data summary"]⟹
ToQRMonPipelineFunction["calculate quantile regression for quantiles 0.2, 0.8 and with 40 knots"]⟹
ToQRMonPipelineFunction["plot"];``````

Implementation notes

The implementation methodology of the QRMon monad packages [AAp3, AAp8] followed the methodology created for the ClCon monad package [AAp9, AA6]. Similarly, this document closely follows the structure and exposition of the ClCon monad document "A monad for classification workflows", [AA6].

A lot of the functionalities and signatures of QRMon were designed and programmed through considerations of natural language commands specifications given to a specialized conversational agent. (As discussed in the previous section.)

References

ConversationalAgents Packages

[AAp10] Anton Antonov, Time series workflows grammar in EBNF, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.

[AAp11] Anton Antonov, QRMon translator Mathematica package, (2018), ConversationalAgents at GitHub, https://github.com/antononcube/ConversationalAgents.