Generation of Random Bethlehem Stars

Introduction

This document/notebook is inspired by the Mathematica Stack Exchange (MSE) question “Plotting the Star of Bethlehem”, [MSE1]. That MSE question asks for an efficient and fast plotting of a certain mathematical function that (maybe) looks like the Star of Bethlehem, [Wk1]. Instead of doing what the author of the question suggests, I decided to use a generative art program and workflows from three of the most important Machine Learning (ML) sub-cultures: Latent Semantic Analysis, Recommendations, and Classification.

Although we discuss the making of Bethlehem Star-like images, the ML workflows and corresponding code presented in this document/notebook have general applicability. In many situations we have to make classifiers based on data that has to be “feature engineered” through a pipeline of several types of ML transformative workflows, and that feature engineering requires multiple iterations of re-examination and tuning in order to achieve the set goals.

The document/notebook is structured as follows:

  1. Target Bethlehem Star images
  2. Simplistic approach
  3. Elaborated approach outline
  4. Sections that follow the elaborated approach outline:
    1. Data generation
    2. Feature extraction
    3. Recommender creation
    4. Classifier creation and utilization experiments

(This document/notebook is a “raw” chapter for the book “Simplified Machine Learning Workflows”, [AAr3].)

Target images

Here are the images taken from [MSE1] that we consider to be “Bethlehem Stars” in this document/notebook:

imgStar1 = Import["https://i.stack.imgur.com/qmmOw.png"];
imgStar2 = Import["https://i.stack.imgur.com/5gtsS.png"];
Row[{imgStar1, Spacer[5], imgStar2}]

We notice that similar images can be obtained using the Wolfram Function Repository (WFR) function RandomMandala, [AAr1]. Here are a dozen examples:

SeedRandom[5];
Multicolumn[Table[MandalaToWhiterImage@ResourceFunction["RandomMandala"]["RotationalSymmetryOrder" -> 2, "NumberOfSeedElements" -> RandomInteger[{2, 8}], "ConnectingFunction" -> FilledCurve@*BezierCurve], 12], 6, Background -> Black]

Simplistic approach

We can just generate a large enough set of mandalas and pick the ones we like.

More precisely we have the following steps:

  1. We generate, say, 200 random mandalas using BlockRandom, keeping track of the random seeds. (A sketch of this step is given right after this list.)
    1. The mandalas are generated with rotational symmetry order 2 and filled Bezier curve connections.
  2. We pick mandalas that look, more or less, like Bethlehem Stars.
  3. We add the picked mandalas to the results list.
  4. If too few mandalas are in the results list we go to 1.
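
Here is a minimal sketch of the candidate generation in step 1 (the seed range and the bookkeeping via an association keyed by seeds are arbitrary choices):

SeedRandom[33];
lsCandidateSeeds = RandomInteger[{1, 10^9}, 200];
aCandidates = Association@Table[rs -> BlockRandom[ResourceFunction["RandomMandala"]["RotationalSymmetryOrder" -> 2, "ConnectingFunction" -> FilledCurve@*BezierCurve], RandomSeeding -> rs], {rs, lsCandidateSeeds}];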

Here are some mandalas generated with those steps:

lsStarReferenceSeeds = DeleteDuplicates@{697734, 227488491, 296515155601, 328716690761, 25979673846, 48784395076, 61082107304, 63772596796, 128581744446, 194807926867, 254647184786, 271909611066, 296515155601, 575775702222, 595562118302, 663386458123, 664847685618, 680328164429, 859482663706};
Multicolumn[
  Table[BlockRandom[ResourceFunction["RandomMandala"]["RotationalSymmetryOrder" -> 2, "NumberOfSeedElements" -> Automatic, "ConnectingFunction" -> FilledCurve@*BezierCurve, ColorFunction -> (White &), Background -> Black], RandomSeeding -> rs], {rs, lsStarReferenceSeeds}] /. GrayLevel[0.25`] -> White, 6, Appearance -> "Horizontal", Background -> Black]

Remark: The plot above looks prettier in a notebook converted with the resource function DarkMode.

Elaborated approach

Assume that we want to automate the simplistic approach described in the previous section.

One way to automate is to create a Machine Learning (ML) classifier that is capable of discerning which RandomMandala objects look like Bethlehem Star target images and which do not. With such a classifier we can write a function BethlehemMandala that applies the classifier on multiple results from RandomMandala and returns those mandalas that the classifier says are good.

Here are the steps of building the proposed classifier:

  • Generate a large enough Random Mandala Images Set (RMIS)
  • Create a feature extractor from a subset of RMIS
  • Assign features to all of RMIS
  • Make a recommender with the RMIS features and other image data (like pixel values)
  • Apply the RMIS recommender over the target Bethlehem Star images and determine and examine image sets that are:
    • the best recommendations
    • the worst recommendations
  • With the best and worst recommendation sets compose training data for classifier making
  • Train a classifier
  • Examine classifier application to (filtering of) random mandala images (both in RMIS and not in RMIS)
  • If the results are not satisfactory redo some or all of the steps above

Remark: If the results are not satisfactory we should consider using the obtained classifier at the data generation phase. (This is not done in this document/notebook.)

Remark: The elaborated approach outline and flow chart have general applicability, not just for generation of random images of a certain type.

Flow chart

Here is a flow chart that corresponds to the outline above:


A few observations for the flow chart follow:

  • The flow chart has a feature extraction block that shows that the feature extraction can be done in several ways.
    • The application of LSA is a type of feature extraction which this document/notebook uses.
  • If the results are not good enough the flow chart shows that the classifier can be used at the data generation phase.
  • If the results are not good enough there are several alternatives to redo or tune the ML algorithms.
    • Changing or tuning the recommender implies training a new classifier.
    • Changing or tuning the feature extraction implies making a new recommender and a new classifier.

Data generation and preparation

In this section we generate random mandala graphics and transform them into images and corresponding vectors. Those image-vectors can be used to apply dimension reduction algorithms. (Other feature extraction algorithms can be applied over the images.)

Generated data

Generate a large number of mandalas:

k = 20000;
knownSeedsQ = False;
SeedRandom[343];
lsRSeeds = Union@RandomInteger[{1, 10^9}, k];
AbsoluteTiming[
  aMandalas = 
    If[TrueQ@knownSeedsQ, 
     Association@Table[rs -> BlockRandom[ResourceFunction["RandomMandala"]["RotationalSymmetryOrder" -> 2, "NumberOfSeedElements" -> Automatic, "ConnectingFunction" -> FilledCurve@*BezierCurve], RandomSeeding -> rs], {rs, lsRSeeds}], 
    (*ELSE*) 
     Association@Table[i -> ResourceFunction["RandomMandala"]["RotationalSymmetryOrder" -> 2, "NumberOfSeedElements" -> Automatic, "ConnectingFunction" -> FilledCurve@*BezierCurve], {i, 1, k}] 
    ]; 
 ]

(*{18.7549, Null}*)

Check the number of mandalas generated:

Length[aMandalas]

(*20000*)

Show a sample of the generated mandalas:

Magnify[Multicolumn[MandalaToWhiterImage /@ RandomSample[Values@aMandalas, 40], 10, Background -> Black], 0.7]

Data preparation

Convert the mandala graphics into images using appropriately large (or appropriately small) image sizes:

AbsoluteTiming[
  aMImages = ParallelMap[ImageResize[#, {120, 120}] &, aMandalas]; 
 ]

(*{248.202, Null}*)

Flatten each of the images into vectors:

AbsoluteTiming[
  aMImageVecs = ParallelMap[Flatten[ImageData[Binarize@ColorNegate@ColorConvert[#, "Grayscale"]]] &, aMImages]; 
 ]

(*{16.0125, Null}*)

Remark: Below those vectors are called image-vectors.
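
As a quick sanity check of the preparation we can reconstruct an image from one of the image-vectors (the partition width 120 matches the image size chosen above):

Image[Partition[First[Values[aMImageVecs]], 120]]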

Feature extraction

In this section we use the software monad LSAMon, [AA1, AAp1], to do dimension reduction over a subset of random mandala images.

Remark: Other feature extraction methods can be used through the built-in functions FeatureExtraction and FeatureExtract.
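
For example, here is a sketch that derives a built-in feature extractor from a sample of the mandala images and applies it to one of them (the default extraction method is assumed):

feImages = FeatureExtraction[RandomSample[Values[aMImages], UpTo[500]]];
feImages[First[Values[aMImages]]]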

Dimension reduction

Create an LSAMon object and extract image topics using Singular Value Decomposition (SVD) or Independent Component Analysis (ICA), [AAr2]:

SeedRandom[893];
AbsoluteTiming[
  lsaObj = 
    LSAMonUnit[]⟹
     LSAMonSetDocumentTermMatrix[SparseArray[Values@RandomSample[aMImageVecs, UpTo[2000]]]]⟹
     LSAMonApplyTermWeightFunctions["None", "None", "Cosine"]⟹
     LSAMonExtractTopics["NumberOfTopics" -> 40, Method -> "ICA", "MaxSteps" -> 240, "MinNumberOfDocumentsPerTerm" -> 0]⟹
     LSAMonNormalizeMatrixProduct[Normalized -> Left]; 
 ]

(*{16.1871, Null}*)

Show the importance coefficients of the topics (if SVD was used the plot would show the singular values):

ListPlot[Norm /@ SparseArray[lsaObj⟹LSAMonTakeH], Filling -> Axis, PlotRange -> All, PlotTheme -> "Scientific"]

Show the interpretation of the extracted image topics:

lsaObj⟹
   LSAMonNormalizeMatrixProduct[Normalized -> Right]⟹
   LSAMonEchoFunctionContext[ImageAdjust[Image[Partition[#, ImageDimensions[aMImages[[1]]][[1]]]]] & /@ SparseArray[#H] &];

Approximation

Pick a test image that is a mandala image or a target image and pre-process it:

If[True, 
   ind = RandomChoice[Range[Length[Values[aMImages]]]]; 
   imgTest = MandalaToWhiterImage@aMandalas[[ind]]; 
   matImageTest = ToSSparseMatrix[SparseArray@List@ImageToVector[imgTest, ImageDimensions[aMImages[[1]]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic], 
  (*ELSE*) 
   imgTest = Binarize[imgStar2, 0.5]; 
   matImageTest = ToSSparseMatrix[SparseArray@List@ImageToVector[imgTest, ImageDimensions[aMImages[[1]]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic] 
  ];
imgTest

Find the representation of the test image with the chosen feature extractor (LSAMon object here):

matRepresentation = lsaObj⟹LSAMonRepresentByTopics[matImageTest]⟹LSAMonTakeValue;
lsCoeff = Normal@SparseArray[matRepresentation[[1, All]]];
ListPlot[lsCoeff, Filling -> Axis, PlotRange -> All]

Show the interpretation of the found representation:

H = SparseArray[lsaObj⟹LSAMonNormalizeMatrixProduct[Normalized -> Right]⟹LSAMonTakeH];
vecRepresentation = lsCoeff . H;
ImageAdjust@Image[Rescale[Partition[vecRepresentation, ImageDimensions[aMImages[[1]]][[1]]]]]

Recommendations

In this section we utilize the software monad SMRMon, [AAp3], to create a recommender for the random mandala images.

Remark: Instead of the Sparse Matrix Recommender (SMR) object, the built-in function Nearest can be used.
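
For example, here is a minimal Nearest-based alternative over the image-vectors (ImageToVector is defined in the section "Code definitions" below):

nfImageVecs = Nearest[Values[aMImageVecs] -> "Index"];
lsNNInds = nfImageVecs[ImageToVector[Binarize[imgStar2, 0.5], {120, 120}], 12]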

Create SSparseMatrix object for all image-vectors:

matImages = ToSSparseMatrix[SparseArray[Values@aMImageVecs], "RowNames" -> Automatic, "ColumnNames" -> Automatic]

Normalize the rows of the image-vectors matrix:

AbsoluteTiming[
  matPixel = WeightTermsOfSSparseMatrix[matImages, "None", "None", "Cosine"] 
 ]

Get the LSA topics matrix:

matH = (lsaObj⟹LSAMonNormalizeMatrixProduct[Normalized -> Right]⟹LSAMonTakeH)

Find the image topics representation for each image-vector (assuming matH was computed with SVD or ICA):

AbsoluteTiming[
  matTopic = matPixel . Transpose[matH] 
 ]

Here we create a recommender based on the image data (pixels) and the extracted image topics (or other image features):

smrObj = 
   SMRMonUnit[]⟹
    SMRMonCreate[<|"Pixel" -> matPixel, "Topic" -> matTopic|>]⟹
    SMRMonApplyNormalizationFunction["Cosine"]⟹
    SMRMonSetTagTypeWeights[<|"Pixel" -> 0.2, "Topic" -> 1|>];

Remark: Note the weights assigned to the pixels and the topics in the recommender object above. Those weights were derived by examining the recommendation results shown below.
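
For example, giving the pixel representation more weight is a one-line re-weighting of the same recommender object (the weight 0.5 is just an illustration):

smrObj2 = smrObj⟹SMRMonSetTagTypeWeights[<|"Pixel" -> 0.5, "Topic" -> 1|>];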

Here is the image we want to find most similar mandala images to – the target image:

imgTarget = Binarize[imgStar2, 0.5]

Here is the profile of the target image:

aProf = MakeSMRProfile[lsaObj, imgTarget, ImageDimensions[aMImages[[1]]]];
TakeLargest[aProf, 6]

(*<|"10032-10009-4392" -> 0.298371, "3906-10506-10495" -> 0.240086, "10027-10014-4387" -> 0.156797, "8342-8339-6062" -> 0.133822, "3182-3179-11222" -> 0.131565, "8470-8451-5829" -> 0.128844|>*)

Using the target image profile, here we compute the recommendation scores for all mandala images of the recommender:

aRecs = 
   smrObj⟹
    SMRMonRecommendByProfile[aProf, All]⟹
    SMRMonTakeValue;

Here is a plot of the similarity scores:

Row[{ResourceFunction["RecordsSummary"][Values[aRecs]], ListPlot[Values[aRecs], ImageSize -> Medium, PlotRange -> All, PlotTheme -> "Detailed", PlotLabel -> "Similarity scores"]}]

Here are the closest (nearest neighbor) mandala images:

Multicolumn[Values[ImageAdjust@*ColorNegate /@ aMImages[[ToExpression /@ Take[Keys[aRecs], 48]]]], 12, Background -> Black]

Here are the most distant mandala images:

Multicolumn[Values[ImageAdjust@*ColorNegate /@ aMImages[[ToExpression /@ Take[Keys[aRecs], -48]]]], 12, Background -> Black]

Classifier creation and utilization

In this section we:

  • Prepare classifier data
  • Build and examine a classifier using the software monad ClCon, [AA2, AAp2], using appropriate training, testing, and validation data ratios
  • Build a classifier utilizing all training data
  • Generate Bethlehem Star mandalas by filtering mandala candidates with the classifier

As mentioned above, we prepare the classifier training data by:

  • Selecting the top, highest-score recommendations and labeling them with True
  • Selecting bad, low-score recommendations and labeling them with False

AbsoluteTiming[
  Block[{
    lsBest = Values@aMandalas[[ToExpression /@ Take[Keys[aRecs], 120]]], 
    lsWorse = Values@aMandalas[[ToExpression /@ Join[Take[Keys[aRecs], -200], RandomSample[Take[Keys[aRecs], {3000, -200}], 200]]]]}, 
   lsTrainingData = 
     Join[
      Map[MandalaToWhiterImage[#, ImageDimensions@aMImages[[1]]] -> True &, lsBest], 
      Map[MandalaToWhiterImage[#, ImageDimensions@aMImages[[1]]] -> False &, lsWorse] 
     ]; 
  ] 
 ]

(*{27.9127, Null}*)

Using ClCon train a classifier and show its performance measures:

clObj = 
   ClConUnit[lsTrainingData]⟹
    ClConSplitData[0.75, 0.2]⟹
    ClConMakeClassifier["NearestNeighbors"]⟹
    ClConClassifierMeasurements⟹
    ClConEchoValue⟹
    ClConClassifierMeasurements["ConfusionMatrixPlot"]⟹
    ClConEchoValue;

Remark: We can re-run the ClCon workflow above several times until we obtain a classifier we want to use.
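
Here is one hedged way to automate that re-running: retrain until the measured accuracy exceeds a chosen threshold (the 0.9 threshold below is an arbitrary assumption):

While[(clObj⟹ClConClassifierMeasurements[{"Accuracy"}]⟹ClConTakeValue)["Accuracy"] < 0.9,
 clObj = ClConUnit[lsTrainingData]⟹ClConSplitData[0.75, 0.2]⟹ClConMakeClassifier["NearestNeighbors"]
]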

Train a classifier with all prepared data:

clObj2 = 
   ClConUnit[lsTrainingData]⟹
    ClConSplitData[1, 0.2]⟹
    ClConMakeClassifier["NearestNeighbors"];

Get the classifier function from ClCon object:

cfBStar = clObj2⟹ClConTakeClassifier

Here we generate Bethlehem Star mandalas using the classifier trained above:

SeedRandom[2020];
Multicolumn[MandalaToWhiterImage /@ BethlehemMandala[12, cfBStar, 0.87], 6, Background -> Black]

Generate Bethlehem Star mandala images utilizing the classifier (with a specified classifier probabilities threshold):

SeedRandom[32];
KeyMap[MandalaToWhiterImage, BethlehemMandala[12, cfBStar, 0.87, "Probabilities" -> True]]

Show unfiltered Bethlehem Star mandala candidates:

SeedRandom[32];
KeyMap[MandalaToWhiterImage, BethlehemMandala[12, cfBStar, 0, "Probabilities" -> True]]

Remark: Examine the probabilities in the image-probability associations above – they show that the classifier is “working”.

Here is another set of generated Bethlehem Star mandalas, using rotational symmetry order 4:

SeedRandom[777];
KeyMap[MandalaToWhiterImage, BethlehemMandala[12, cfBStar, 0.8, "RotationalSymmetryOrder" -> 4, "Probabilities" -> True]]

Remark: Note that although a higher rotational symmetry order is used, the highly scored results still seem relevant – they have the features of the target Bethlehem Star images.

References

[AA1] Anton Antonov, “A monad for Latent Semantic Analysis workflows”, (2019), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, “A monad for classification workflows”, (2018), MathematicaForPrediction at WordPress.

[MSE1] “Plotting the Star of Bethlehem”, (2020), Mathematica Stack Exchange, question 236499.

[Wk1] Wikipedia entry, Star of Bethlehem.

Packages

[AAr1] Anton Antonov, RandomMandala, (2019), Wolfram Function Repository.

[AAr2] Anton Antonov, IndependentComponentAnalysis, (2019), Wolfram Function Repository.

[AAr3] Anton Antonov, “Simplified Machine Learning Workflows” book, (2019), GitHub/antononcube.

[AAp1] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub/antononcube.

[AAp2] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub/antononcube.

[AAp3] Anton Antonov, Monadic Sparse Matrix Recommender Mathematica package, (2018), MathematicaForPrediction at GitHub/antononcube.

Code definitions

urlPart = "https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/";
Get[urlPart <> "MonadicLatentSemanticAnalysis.m"];
Get[urlPart <> "MonadicSparseMatrixRecommender.m"];
Get[urlPart <> "MonadicContextualClassification.m"];
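(* MandalaToImage / MandalaToWhiterImage: convert mandala graphics into color-negated images of the given size; the "whiter" version first replaces GrayLevel[0.25] with Black, so those elements become white after negation. *)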
Clear[MandalaToImage, MandalaToWhiterImage];
MandalaToImage[gr_Graphics, imgSize_ : {120, 120}] := ColorNegate@ImageResize[gr, imgSize];
MandalaToWhiterImage[gr_Graphics, imgSize_ : {120, 120}] := ColorNegate@ImageResize[gr /. GrayLevel[0.25`] -> Black, imgSize];
Clear[ImageToVector];
ImageToVector[img_Image] := Flatten[ImageData[ColorConvert[img, "Grayscale"]]];
ImageToVector[img_Image, imgSize_] := Flatten[ImageData[ColorConvert[ImageResize[img, imgSize], "Grayscale"]]];
ImageToVector[___] := $Failed;
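(* MakeSMRProfile: make a recommender profile for given mandala graphics or image(s) by joining their pixel ("terms") and topic representations computed with the LSAMon object. *)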
Clear[MakeSMRProfile];
MakeSMRProfile[lsaObj_LSAMon, gr_Graphics, imgSize_] := MakeSMRProfile[lsaObj, {gr}, imgSize];
MakeSMRProfile[lsaObj_LSAMon, lsGrs : {_Graphics ..}, imgSize_] := MakeSMRProfile[lsaObj, MandalaToWhiterImage[#, imgSize] & /@ lsGrs, imgSize];
MakeSMRProfile[lsaObj_LSAMon, img_Image, imgSize_] := MakeSMRProfile[lsaObj, {img}, imgSize];
MakeSMRProfile[lsaObj_LSAMon, lsImgs : {_Image ..}, imgSize_] := 
   Block[{lsImgVecs, matTest, aProfPixel, aProfTopic}, 
    lsImgVecs = ImageToVector[#, imgSize] & /@ lsImgs; 
    matTest = ToSSparseMatrix[SparseArray[lsImgVecs], "RowNames" -> Automatic, "ColumnNames" -> Automatic]; 
    aProfPixel = ColumnSumsAssociation[lsaObj⟹LSAMonRepresentByTerms[matTest]⟹LSAMonTakeValue]; 
    aProfTopic = ColumnSumsAssociation[lsaObj⟹LSAMonRepresentByTopics[matTest]⟹LSAMonTakeValue]; 
    aProfPixel = Select[aProfPixel, # > 0 &]; 
    aProfTopic = Select[aProfTopic, # > 0 &]; 
    Join[aProfPixel, aProfTopic] 
   ];
MakeSMRProfile[___] := $Failed;
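(* BethlehemMandalaCandidate: generate one random mandala candidate. BethlehemMandala: generate n mandalas for which the classifier probability of True is at least the specified threshold. *)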
Clear[BethlehemMandalaCandidate];
BethlehemMandalaCandidate[opts : OptionsPattern[]] := ResourceFunction["RandomMandala"][opts, "RotationalSymmetryOrder" -> 2, "NumberOfSeedElements" -> Automatic, "ConnectingFunction" -> FilledCurve@*BezierCurve];
Clear[BethlehemMandala];
Options[BethlehemMandala] = Join[{ImageSize -> {120, 120}, "Probabilities" -> False}, Options[ResourceFunction["RandomMandala"]]];
BethlehemMandala[n_Integer, cf_ClassifierFunction, opts : OptionsPattern[]] := BethlehemMandala[n, cf, 0.87, opts];
BethlehemMandala[n_Integer, cf_ClassifierFunction, threshold_?NumericQ, opts : OptionsPattern[]] := 
   Block[{imgSize, probsQ, res, resNew, aResScores = <||>, aResScoresNew = <||>}, 
     
     imgSize = OptionValue[BethlehemMandala, ImageSize]; 
     probsQ = TrueQ[OptionValue[BethlehemMandala, "Probabilities"]]; 
     
     res = {}; 
     While[Length[res] < n, 
      resNew = Table[BethlehemMandalaCandidate[FilterRules[{opts}, Options[ResourceFunction["RandomMandala"]]]], 2*(n - Length[res])]; 
      aResScoresNew = Association[# -> cf[MandalaToImage[#, imgSize], "Probabilities"][True] & /@ resNew]; 
      aResScoresNew = Select[aResScoresNew, # >= threshold &]; 
      aResScores = Join[aResScores, aResScoresNew]; 
      res = Keys[aResScores] 
     ]; 
     
     aResScores = TakeLargest[ReverseSort[aResScores], UpTo[n]]; 
     If[probsQ, aResScores, Keys[aResScores]] 
    ] /; n > 0;
BethlehemMandala[___] := $Failed

A monad for classification workflows

Introduction

In this document we describe the design and implementation of a (software programming) monad for classification workflows specification and execution. The design and implementation are done with Mathematica / Wolfram Language (WL).

The goal of the monad design is to make the specification of classification workflows (relatively) easy and straightforward by following a certain main scenario and specifying variations over that scenario.

The monad is named ClCon and it is based on the State monad package "StateMonadCodeGenerator.m", [AAp1, AA1], the classifier ensembles package "ClassifierEnsembles.m", [AAp4, AA2], and the package for Receiver Operating Characteristic (ROC) functions calculation and plotting "ROCFunctions.m", [AAp5, AA2, Wk2].

The data for this document is read from WL’s repository using the package "GetMachineLearningDataset.m", [AAp10].

The monadic programming design is used as a Software Design Pattern. The ClCon monad can be also seen as a Domain Specific Language (DSL) for the specification and programming of machine learning classification workflows.

Here is an example of using the ClCon monad over the Titanic data:

"ClCon-simple-dsTitanic-pipeline"

"ClCon-simple-dsTitanic-pipeline"

The table above is produced with the package "MonadicTracing.m", [AAp2, AA1], and some of the explanations below also utilize that package.

As it was mentioned above the monad ClCon can be seen as a DSL. Because of this the monad pipelines made with ClCon are sometimes called "specifications".

Contents description

The document has the following structure.

  • The sections "Package load" and "Data load" obtain the needed code and data.
    (Needed and put upfront from the "Reproducible research" point of view.)

  • The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.

  • The sections "ClCon overview" and "Monad elements" provide technical description of the ClCon monad needed to utilize it.
    (Using a fair amount of examples.)

  • The section "Example use cases" gives several more elaborated examples of ClCon that have "real life" flavor.
    (But still didactic and concise enough.)

  • The section "Unit test" describes the tests used in the development of the ClCon monad.
    (The random pipelines unit tests are especially interesting.)

  • The section "Future plans" outlines future directions of development.
    (The most interesting and important one is the "conversational agent" direction.)

  • The section "Implementation notes" has (i) a diagram outlining the ClCon development process, and (ii) a list of observations and morals.
    (Some fairly obvious, but deemed fairly significant and hence stated explicitly.)

Remark: One can read only the sections "Introduction", "Design consideration", "Monad design", and "ClCon overview". That set of sections provides a fairly good, programming-language-agnostic exposition of the substance and novel ideas of this document.

Package load

The following commands load the packages [AAp1–AAp10, AAp12]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m"]

(*
Importing from GitHub: MathematicaForPredictionUtilities.m
Importing from GitHub: MosaicPlot.m
Importing from GitHub: CrossTabulate.m
Importing from GitHub: StateMonadCodeGenerator.m
Importing from GitHub: ClassifierEnsembles.m
Importing from GitHub: ROCFunctions.m
Importing from GitHub: VariableImportanceByClassifiers.m
Importing from GitHub: SSparseMatrix.m
Importing from GitHub: OutlierIdentifiers.m
*)

Data load

In this section we load data that is used in the rest of the document. The "quick" data is created in order to specify quick, illustrative computations.

Remark: In all datasets the classification labels are in the last column.

The summarization of the data is done through ClCon, which in turn uses the function RecordsSummary from the package "MathematicaForPredictionUtilities.m", [AAp7].

WL resources data

The following commands produce datasets using the package [AAp10] (that utilizes ExampleData):

dsTitanic = GetMachineLearningDataset["Titanic"];
dsMushroom = GetMachineLearningDataset["Mushroom"];
dsWineQuality = GetMachineLearningDataset["WineQuality"];

Here are the dimensions of the datasets:

Dataset[Dataset[Map[Prepend[Dimensions[ToExpression[#]], #] &, {"dsTitanic", "dsMushroom", "dsWineQuality"}]][All, AssociationThread[{"name", "rows", "columns"}, #] &]]
"ClCon-datasets-dimensions"

"ClCon-datasets-dimensions"

Here is the summary of dsTitanic:

ClConUnit[dsTitanic]⟹ClConSummarizeData["MaxTallies" -> 12];
"ClCon-dsTitanic-summary"

"ClCon-dsTitanic-summary"

Here is the summary of dsMushroom in long form:

ClConUnit[dsMushroom]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
"ClCon-dsMushroom-summary"

"ClCon-dsMushroom-summary"

Here is the summary of dsWineQuality in long form:

ClConUnit[dsWineQuality]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
"ClCon-dsWineQuality-summary"

"ClCon-dsWineQuality-summary"

"Quick" data

In this subsection we make up some data that is used for illustrative purposes.

SeedRandom[212]
dsData = RandomInteger[{0, 1000}, {100}];
dsData = Dataset[
   Transpose[{dsData, Mod[dsData, 3], Last@*IntegerDigits /@ dsData, ToString[Mod[#, 3]] & /@ dsData}]];
dsData = Dataset[dsData[All, AssociationThread[{"number", "feature1", "feature2", "label"}, #] &]];
Dimensions[dsData]

(* {100, 4} *)

Here is a sample of the data:

RandomSample[dsData, 6]
"ClCon-quick-data-sample"

"ClCon-quick-data-sample"

Here is a summary of the data:

ClConUnit[dsData]⟹ClConSummarizeData;
"ClCon-quick-data-summary-ds"

"ClCon-quick-data-summary-ds"

Here we convert the data into a list of record-label rules (and show the summary):

mlrData = ClConToNormalClassifierData[dsData];
ClConUnit[mlrData]⟹ClConSummarizeData;
"ClCon-quick-data-summary-mlr"

"ClCon-quick-data-summary-mlr"

Finally, we make the array version of the dataset:

arrData = Normal[dsData[All, Values]];

Design considerations

The steps of the main classification workflow addressed in this document follow.

  1. Retrieve data from a data repository.

  2. Optionally, transform the data.

  3. Split the data into training and test parts.

    • Optionally, split the training data into training and validation parts.

  4. Make a classifier with the training data.

  5. Test the classifier over the test data.

    • Compute different measures, including ROC.

The following diagram shows the steps.

"Classification-workflow-horizontal-layout"

Very often the workflow above is too simple for real situations. Often when making "real world" classifiers we have to experiment with different transformations, different classifier algorithms, and parameters for both transformations and classifiers. Examine the following mind-map that outlines the activities in making competition classifiers.

"Making-competitions-classifiers-mind-map.png"

In view of the mind-map above we can come up with the following flow-chart that is an elaboration on the main, simple workflow flow-chart.

"Classification-workflow-extended.jpg"

In order to address:

  • the introduction of new elements in classification workflows,

  • workflows elements variability, and

  • workflows iterative changes and refining,

it is beneficial to have a DSL for classification workflows. We choose to make such a DSL through a functional programming monad, [Wk1, AA1].

Here is a quote from [Wk1] that fairly well describes why we choose to make a classification workflow monad and hints on the desired properties of such a monad.

[…] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. […]

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. […]

Remark: Note that the quote from [Wk1] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.

Monad design

The monad we consider is designed to speed-up the programming of classification workflows outlined in the previous section. The monad is named ClCon for "Classification with Context".

We want to be able to construct monad pipelines of the general form:

"ClCon-generic-monad-formula"

"ClCon-generic-monad-formula"

ClCon is based on the State monad, [Wk1, AA1], so the monad pipeline form (1) has the following more specific form:

"ClCon-State-monad-formula"

"ClCon-State-monad-formula"

This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.

In the monad pipelines of ClCon we store different objects in the contexts for at least one of the following two reasons.

  1. The object will be needed later on in the pipeline.

  2. The object is hard to compute.

Such objects are training data, ROC data, and classifiers.

Let us list the desired properties of the monad.

  • Rapid specification of non-trivial classification workflows.

  • The monad works with different data types: Dataset, lists of machine learning rules, full arrays.

  • The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.

  • The monad works with single classifier objects and with classifier ensembles.

    • This means support of different classifier measures and ROC plots for both single classifiers and classifier ensembles.
  • The monad allows cursory examination and summarization of the data.
    • For insight and in order to verify assumptions.
  • The monad has operations to compute importance of variables.

  • We can easily obtain the pipeline value, context, and different context objects for manipulation outside of the monad.

  • We can calculate classification measures using a specified ROC parameter and a class label.

  • We can easily plot different combinations of ROC functions.

The ClCon components and their interaction are given in the following diagram. (The components correspond to the main workflow given in the previous section.)

"ClCon-components-interaction.jpg"

In the diagram above the operations are given in rectangles. Data objects are given in round corner rectangles and classifier objects are given in round corner squares.

The main ClCon operations implicitly put in the context or utilize from the context the following objects:

  • training data,

  • test data,

  • validation data,

  • classifier (a classifier function or an association of classifier functions),

  • ROC data,

  • variable names list.

Note that the set of types of ClCon pipeline values is fairly heterogeneous, and a certain awareness of "the current pipeline value" is assumed when composing ClCon pipelines.

Obviously, we can put in the context any object through the generic operations of the State monad of the package "StateMonadCodeGenerator.m", [AAp1].

ClCon overview

When using a monad we lift certain data into the "monad space"; using the monad's operations we navigate computations in that space, and at some point we take results from it.

With the approach taken in this document the "lifting" into the ClCon monad is done with the function ClConUnit. Results from the monad can be obtained with the functions ClConTakeValue, ClConTakeContext, or with the other ClCon functions with the prefix "ClConTake" (see below).

Here is a corresponding diagram of a generic computation with the ClCon monad:

"ClCon-pipeline"

Remark: It is a good idea to compare the diagram with formulas (1) and (2).

Let us examine a concrete ClCon pipeline that corresponds to the diagram above. In the following table each pipeline operation is combined together with a short explanation and the context keys after its execution.

"ClCon-pipeline-TraceMonad-table"

"ClCon-pipeline-TraceMonad-table"

Here is the output of the pipeline:

"ClCon-pipeline-TraceMonad-Echo-output"

"ClCon-pipeline-TraceMonad-Echo-output"

In the specified pipeline computation the last column of the dataset is assumed to be the one with the class labels.

The ClCon functions are separated into four groups:

  • operations,

  • setters,

  • takers,

  • State Monad generic functions.

An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the ClCon operations.

Monad functions interaction with the pipeline value and context

The following table gives an overview of the interaction of the ClCon monad functions with the pipeline value and context.

"ClCon-table-of-operations-setters-takers"

"ClCon-table-of-operations-setters-takers"

Several functions that use ROC data have two rows in the table because they calculate the needed ROC data if it is not available in the monad context.

State monad functions

Here are the ClCon State Monad functions (generated using the prefix "ClCon", [AAp1, AA1]):

"ClCon-StateMonad-functions-table"

"ClCon-StateMonad-functions-table"

Monad elements

In this section we show that ClCon has all of the properties listed in the previous section.

The monad head

The monad head is ClCon. Anything wrapped in ClCon can serve as monad’s pipeline value. It is better though to use the constructor ClConUnit. (Which adheres to the definition in [Wk1].)

ClCon[{{1, "a"}, {2, "b"}}, <||>]⟹ClConSummarizeData;
"ClCon-monad-head-example"

"ClCon-monad-head-example"

Lifting data to the monad

The function lifting the data into the monad ClCon is ClConUnit.

The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.

ClConUnit[dsData]⟹ClConSummarizeData;
"ClCon-lifting-data-example-1"

"ClCon-lifting-data-example-1"

ClConUnit[]⟹ClConSetTrainingData[dsData]⟹ClConSummarizeData;
"ClCon-lifting-data-example-2"

"ClCon-lifting-data-example-2"

(See the sub-section "Setters and takers" for more details of setting and taking values in ClCon contexts.)

Currently the monad can deal with data in the following forms:

  • datasets,

  • matrices,

  • lists of example->label rules.

The ClCon monad also has the non-monadic function ClConToNormalClassifierData which can be used to convert datasets and matrices to lists of example->label rules. Here is an example:

Short[ClConToNormalClassifierData[dsData], 3]

(*
 {{639, 0, 9} -> "0", {121, 1, 1} -> "1", {309, 0, 9} ->  "0", {648, 0, 8} -> "0", {995, 2, 5} -> "2", {127, 1, 7} -> "1", {908, 2, 8} -> "2", {564, 0, 4} -> "0", {380, 2, 0} -> "2", {860, 2, 0} -> "2",
 <<80>>,
 {464, 2, 4} -> "2", {449, 2, 9} -> "2", {522, 0, 2} -> "0", {288, 0, 8} -> "0", {51, 0, 1} -> "0", {108, 0, 8} -> "0", {76, 1, 6} -> "1", {706, 1, 6} -> "1", {765, 0, 5} -> "0", {195, 0, 5} -> "0"}
*)

When the data lifted to the monad is a dataset or a matrix it is assumed that the last column has the class labels. WL makes it easy to rearrange columns in such a way that any column of a dataset or a matrix becomes the last.
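
For example, here is one (illustrative) way to rearrange dsTitanic so that "passengerAge" becomes the last, and hence the label, column:

dsTitanicByAge = dsTitanic[All, {"id", "passengerClass", "passengerSex", "passengerSurvival", "passengerAge"}];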

Data splitting

The splitting is made with ClConSplitData, which takes up to two arguments and options. The first argument specifies the fraction of training data. The second argument, if given, specifies the fraction of the validation part of the training data. If the value of the option Method is "LabelsProportional", then the splitting is done in correspondence with the class label tallies. ("LabelsProportional" is the default value.) Data splitting demonstration examples follow.

Here are the dimensions of the dataset dsData:

Dimensions[dsData]

(* {100, 4} *)

Here we split the data into 70% for training and 30% for testing and then we verify that the corresponding number of rows add to the number of rows of dsData:

val = ClConUnit[dsData]⟹ClConSplitData[0.7]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(* 
 <|"trainingData" -> {69, 4}, "testData" -> {31, 4}|>
 100 
*)

Note that if Method is not "LabelsProportional" we get slightly different results.

val = ClConUnit[dsData]⟹ClConSplitData[0.7, Method -> "Random"]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(*
  <|"trainingData" -> {70, 4}, "testData" -> {30, 4}|>
 100 
*)

In the following code we split the data into 70% for training and 30% for testing, then the training data is further split into 90% for training and 10% for classifier training validation; then we verify that the number of rows add up.

val = ClConUnit[dsData]⟹ClConSplitData[0.7, 0.1]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]

(*
 <|"trainingData" -> {61, 4}, "testData" -> {31, 4}, "validationData" -> {8, 4}|>
 100
*)

Classifier training

The monad ClCon supports both single classifiers obtained with Classify and classifier ensembles obtained with Classify and managed with the package "ClassifierEnsembles.m", [AAp4].

Single classifier training

With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a Logistic Regression classifier, and finally take that classifier from the monad.

cf =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier["LogisticRegression"]⟹
   ClConTakeClassifier;

Here is information about the obtained classifier:

ClassifierInformation[cf, "TrainingTime"]

(* Quantity[3.84008, "Seconds"] *)

If we want to pass parameters to the classifier training we can use the Method option. Here we train a Random Forest classifier with 400 trees:

cf =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}]⟹
   ClConTakeClassifier;

ClassifierInformation[cf, "TreeNumber"]

(* 400 *)

Classifier ensemble training

With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a classifier ensemble of three Logistic Regression classifiers and two Nearest Neighbors classifiers using random sampling of 90% of the training data, and finally take that classifier ensemble from the monad.

ensemble =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[{{"LogisticRegression", 0.9, 3}, {"NearestNeighbors", 0.9, 2}}]⟹
   ClConTakeClassifier;

The classifier ensemble is simply an association with keys that are automatically assigned names and corresponding values that are classifiers.

ensemble
"ClCon-ensemble-classifier-example-1"

"ClCon-ensemble-classifier-example-1"

Here are the training times of the classifiers in the obtained ensemble:

ClassifierInformation[#, "TrainingTime"] & /@ ensemble

(*
 <|"LogisticRegression[1,0.9]" -> Quantity[3.47836, "Seconds"], 
   "LogisticRegression[2,0.9]" -> Quantity[3.47681, "Seconds"], 
   "LogisticRegression[3,0.9]" -> Quantity[3.4808, "Seconds"], 
   "NearestNeighbors[1,0.9]" -> Quantity[1.82454, "Seconds"], 
   "NearestNeighbors[2,0.9]" -> Quantity[1.83804, "Seconds"]|>
*)
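
Since the ensemble is just an association of classifiers, records can also be classified with the functions of "ClassifierEnsembles.m", [AAp4]. Here is a hedged sketch that assumes the signature EnsembleClassify[ensemble, record, "Votes"] from that package:

record = First[Keys[ClConToNormalClassifierData[dsTitanic]]];
EnsembleClassify[ensemble, record, "Votes"]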

A more precise specification can be given using associations. The specification

<|"method" -> "LogisticRegression", "sampleFraction" -> 0.9, "numberOfClassifiers" -> 3, "samplingFunction" -> RandomChoice|>

says "make three Logistic Regression classifiers, for each taking 90% of the training data using the function RandomChoice."

Here is a pipeline specification equivalent to the pipeline specification above:

ensemble2 =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier[{
       <|"method" -> "LogisticRegression", 
         "sampleFraction" -> 0.9, 
         "numberOfClassifiers" -> 3, 
         "samplingFunction" -> RandomSample|>, 
       <|"method" -> "NearestNeighbors", 
         "sampleFraction" -> 0.9, 
         "numberOfClassifiers" -> 2, 
         "samplingFunction" -> RandomSample|>}]⟹
   ClConTakeClassifier;

ensemble2
"ClCon-ensemble-classifier-example-2"

"ClCon-ensemble-classifier-example-2"

Classifier testing

Classifier testing is done with the testing data in the context.

Here is a pipeline that takes the Titanic data, splits it, and trains a classifier:

p =
  ClConUnit[dsTitanic]⟹
   ClConSplitData[0.75]⟹
   ClConMakeClassifier["DecisionTree"];

Here is how we compute selected classifier measures:

p⟹
 ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
 ClConTakeValue

(*
 <|"Accuracy" -> 0.792683, 
   "Precision" -> <|"died" -> 0.802691, "survived" -> 0.771429|>, 
   "Recall" -> <|"died" -> 0.881773, "survived" -> 0.648|>, 
   "FalsePositiveRate" -> <|"died" -> 0.352, "survived" -> 0.118227|>|>
*)

(The measures are listed in the function page of ClassifierMeasurements.)

Here we show the confusion matrix plot:

p⟹ClConClassifierMeasurements["ConfusionMatrixPlot"]⟹ClConEchoValue;
"ClCon-classifier-testing-ConfusionMatrixPlot-echo"

"ClCon-classifier-testing-ConfusionMatrixPlot-echo"

Here is how we plot ROC curves by specifying the ROC parameter range and the image size:

p⟹ClConROCPlot["FPR", "TPR", "ROCRange" -> Range[0, 1, 0.1], ImageSize -> 200];
"ClCon-classifier-testing-ROCPlot-echo"

"ClCon-classifier-testing-ROCPlot-echo"

Remark: ClCon uses the package ROCFunctions.m, [AAp5], which implements all functions defined in [Wk2].

Here we plot ROC functions values (y-axis) over the ROC parameter (x-axis):

p⟹ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}];
"ClCon-classifier-testing-ROCListLinePlot-echo"

Note of the "ClConROC*Plot" functions automatically echo the plots. The plots are also made to be the pipeline value. Using the option specification "Echo"->False the automatic echoing of plots can be suppressed. With the option "ClassLabels" we can focus on specific class labels.

p⟹
  ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}, "Echo" -> False, "ClassLabels" -> "survived", ImageSize -> Medium]⟹
  ClConEchoValue;
"ClCon-classifier-testing-ROCListLinePlot-survived-echo"

"ClCon-classifier-testing-ROCListLinePlot-survived-echo"

Variable importance finding

Using the pipeline constructed above let us find the most decisive variables using systematic random shuffling (as explained in [AA3]):

p⟹
 ClConAccuracyByVariableShuffling⟹
 ClConTakeValue

(*
 <|None -> 0.792683, "id" -> 0.664634, "passengerClass" -> 0.75, "passengerAge" -> 0.777439, "passengerSex" -> 0.612805|>
*)

We deduce that "passengerSex" is the most decisive variable because its corresponding classification success rate is the smallest. (See [AA3] for more details.)

Using the option "ClassLabels" we can focus on specific class labels:

p⟹ClConAccuracyByVariableShuffling["ClassLabels" -> "survived"]⟹ClConTakeValue

(*
 <|None -> {0.771429}, "id" -> {0.595506}, "passengerClass" -> {0.731959}, "passengerAge" -> {0.71028}, "passengerSex" -> {0.414414}|>
*)

Setters and takers

The values from the monad context can be set or obtained with the corresponding "setters" and "takers" functions as summarized in previous section.

For example:

p⟹ClConTakeClassifier

(* ClassifierFunction[__] *) 

Short[Normal[p⟹ClConTakeTrainingData]]

(*
  {<|"id" -> 858, "passengerClass" -> "3rd", "passengerAge" -> 30, "passengerSex" -> "male", "passengerSurvival" -> "survived"|>, <<979>> }
*)

Short[Normal[p⟹ClConTakeTestData]]

(* {<|"id" -> 285, "passengerClass" -> "1st", "passengerAge" -> 60, "passengerSex" -> "female", "passengerSurvival" -> "survived"|> , <<327>> } 
*)

p⟹ClConTakeVariableNames

(* {"id", "passengerClass", "passengerAge", "passengerSex", "passengerSurvival"} *)

If other values are put in the context they can be obtained through the (generic) function ClConTakeContext, [AAp1]:

p = ClConUnit[RandomReal[1, {2, 2}]]⟹ClConAddToContext["data"];

(p⟹ClConTakeContext)["data"]

(* {{0.815836, 0.191562}, {0.396868, 0.284587}} *)

Another generic function from [AAp1] is ClConTakeValue (used many times above.)

Example use cases

Classification with MNIST data

Here we show an example of using ClCon with the reasonably large MNIST images dataset, [YL1].

mnistData = ExampleData[{"MachineLearning", "MNIST"}, "Data"];

SeedRandom[3423]
p =
  ClConUnit[RandomSample[mnistData, 20000]]⟹
   ClConSplitData[0.7]⟹
   ClConSummarizeData⟹
   ClConMakeClassifier["NearestNeighbors"]⟹
   ClConClassifierMeasurements[{"Accuracy", "ConfusionMatrixPlot"}]⟹
   ClConEchoValue;
"ClCon-MNIST-example-output"

"ClCon-MNIST-example-output"

Here we plot the ROC curve for a specified digit:

p⟹ClConROCPlot["ClassLabels" -> 5];

Conditional continuation

In this sub-section we show how the computations in a ClCon pipeline can be stopped or continued based on a certain condition.

The pipeline below makes a simple classifier ("LogisticRegression") for the WineQuality data and, if the recall for the important label ("high") is not large enough, makes a more complicated classifier ("RandomForest"). The pipeline marks intermediate steps by echoing outcomes and messages.

SeedRandom[267]
res =
  ClConUnit[dsWineQuality[All, Join[#, <|"wineQuality" -> If[#wineQuality >= 7, "high", "low"]|>] &]]⟹
   ClConSplitData[0.75, 0.2]⟹
   ClConSummarizeData(* summarize the data *)⟹
   ClConMakeClassifier[Method -> "LogisticRegression"](* training a simple classifier *)⟹
   ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
   ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
   ClConEchoValue⟹
   ClConIfElse[#["Recall", "high"] > 0.70 & (* criteria based on the recall for "high" *),
    ClConEcho["Good recall for \"high\"!", "Success:"],
    ClConUnit[##]⟹
      ClConEcho[Style["Recall for \"high\" not good enough... making a large random forest.", Darker[Red]], "Info:"]⟹
      ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}](* training a complicated classifier *)⟹
      ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
      ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
      ClConEchoValue &];
"ClCon-conditional-continuation-example-output"

"ClCon-conditional-continuation-example-output"

We can see that the recall with the more complicated classifier is higher. Also, the ROC plots of the second classifier are visibly closer to the ideal one. Still, the recall is not good enough; we have to find a threshold that is better than the default one. (See the next sub-section.)

Classification with custom thresholds

(In this sub-section we use the monad from the previous sub-section.)

Here we compute classification measures using the threshold 0.3 for the important class label ("high"):

res⟹
 ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}, "high" -> 0.3]⟹
 ClConTakeValue

(* <|"Accuracy" -> 0.782857,  "Precision" -> <|"high" -> 0.498871, "low" -> 0.943734|>, 
     "Recall" -> <|"high" -> 0.833962, "low" -> 0.76875|>, 
     "FalsePositiveRate" -> <|"high" -> 0.23125, "low" -> 0.166038|>|> *)

We can see that the recall for "high" is fairly large and the rest of the measures have satisfactory values. (The accuracy did not drop that much, and the false positive rate is not that large.)

Here we compute suggestions for the best thresholds:

res (* start with a previous monad *)⟹
  ClConROCPlot[ImageSize -> 300] (* make ROC plots *)⟹
  ClConSuggestROCThresholds[3] (* find the best 3 thresholds per class label *)⟹
  ClConEchoValue (* echo the result *);
"ClCon-best-thresholds-example-output"

"ClCon-best-thresholds-example-output"

The suggestions are the ROC points that are closest to the point {0, 1}, which corresponds to the ideal classifier.
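
In code, that selection criterion is simply the Euclidean distance of an ROC point {FPR, TPR} to the ideal point {0, 1} (illustrative only):

rocDistance[fpr_?NumericQ, tpr_?NumericQ] := EuclideanDistance[{fpr, tpr}, {0, 1}]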

Here is a way to use threshold suggestions within the monad pipeline:

res⟹
  ClConSuggestROCThresholds⟹
  ClConEchoValue⟹
  (ClConUnit[##]⟹
    ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall"}, "high" -> First[#1["high"]]] &)⟹
  ClConEchoValue;

(*
value: <|high->{0.35},low->{0.65}|>
value: <|Accuracy->0.825306,Precision-><|high->0.571831,low->0.928736|>,Recall-><|high->0.766038,low->0.841667|>|> 
*)

Unit tests

The development of ClCon was done with two types of unit tests: (1) directly specified tests, [AAp11], and (2) tests based on randomly generated pipelines, [AAp12].

Both unit test packages should be further extended in order to provide better coverage of the functionalities and illustrate — and postulate — pipeline behavior.

Directly specified tests

Here we run the unit tests file "MonadicContextualClassification-Unit-Tests.wlt", [AAp11]:

AbsoluteTiming[
 testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicContextualClassification-Unit-Tests.wlt"]
]
"ClCon-direct-unit-tests-TestReport-icon"

"ClCon-direct-unit-tests-TestReport-icon"

The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp11].

Values[Map[#["TestID"] &, testObject["TestResults"]]]

(* {"LoadPackage", "EvenOddDataset", "EvenOddDataMLRules", \
"DataToContext-no-[]", "DataToContext-with-[]", \
"ClassifierMaking-with-Dataset-1", "ClassifierMaking-with-MLRules-1", \
"AccuracyByVariableShuffling-1", "ROCData-1", \
"ClassifierEnsemble-different-methods-1", \
"ClassifierEnsemble-different-methods-2-cont", \
"ClassifierEnsemble-different-methods-3-cont", \
"ClassifierEnsemble-one-method-1", "ClassifierEnsemble-one-method-2", \
"ClassifierEnsemble-one-method-3-cont", \
"ClassifierEnsemble-one-method-4-cont", "AssignVariableNames-1", \
"AssignVariableNames-2", "AssignVariableNames-3", "SplitData-1", \
"Set-and-take-training-data", "Set-and-take-test-data", \
"Set-and-take-validation-data", "Partial-data-summaries-1", \
"Assign-variable-names-1", "Split-data-100-pct", \
"MakeClassifier-with-empty-unit-1", \
"No-rocData-after-second-MakeClassifier-1"} *)

Random pipelines tests

Since the monad ClCon is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the ClCon DSL the sentences are ClCon pipelines. The package "MonadicContextualClassificationRandomPipelinesUnitTests.m", [AAp12], has functions for generation of ClCon random pipelines and running them as verification tests. A short example follows.

Generate pipelines:

SeedRandom[234]
pipelines = MakeClConRandomPipelines[300];
Length[pipelines]

(* 300 *)

Here is a sample of the generated pipelines:

Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]}, 
 Clear[DoubleLongRightArrow];
 pipelines = pipelines /. {_Dataset -> "ds", _?DataRulesForClassifyQ -> "mlrData"};
 GridTableForm[
  Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines], 
  TableHeadings -> {"pipeline"}]
]
AutoCollapse[]
"ClCon-random-pipelines-tests-sample-table"

"ClCon-random-pipelines-tests-sample-table"

Here we run the pipelines as unit tests:

AbsoluteTiming[
 res = TestRunClConPipelines[pipelines, "Echo" -> True];
]

(* {350.083, Null} *)

From the test report results we see that a dozen tests failed with messages; all of the rest passed.

rpTRObj = TestReport[res]
"ClCon-random-pipelines-TestReport-icon"

"ClCon-random-pipelines-TestReport-icon"

(The message failures, of course, have to be examined — some bugs were found in that way. Currently the actual test messages are expected.)

Future plans

Workflow operations

Outliers

Better incorporation of outlier finding and manipulation in ClCon. Currently only outlier finding is surfaced in [AAp3]. (The package internally has other related functions.)

ClConUnit[dsTitanic[Select[#passengerSex == "female" &]]]⟹
 ClConOutlierPosition⟹
 ClConTakeValue

(* {4, 17, 21, 22, 25, 29, 38, 39, 41, 59} *)

Dimension reduction

Support of dimension reduction application: quick construction of pipelines that allow applying different dimension reduction methods.

Currently, ClCon applies dimension reduction only to data whose non-label parts can be easily converted into numerical matrices.

ClConUnit[dsWineQuality]⟹
  ClConSplitData[0.7]⟹
  ClConReduceDimension[2, "Echo" -> True]⟹
  ClConRetrieveFromContext["svdRes"]⟹
  ClConEchoFunctionValue["SVD dimensions:", Dimensions /@ # &]⟹
  ClConSummarizeData;
"ClCon-dimension-reduction-example-echo"

"ClCon-dimension-reduction-example-echo"

Conversational agent

Using the packages [AAp13, AAp15] we can generate ClCon pipelines with natural commands. The plan is to develop and document those functionalities further.

Implementation notes

The ClCon package, MonadicContextualClassification.m, [AAp3], is based on the packages [AAp1, AAp4-AAp9]. It was developed using Mathematica and the Mathematica plug-in for IntelliJ IDEA by Patrick Scheibe, [PS1]. The following diagram shows the development workflow.

"ClCon-development-cycle"

Some observations and morals follow.

  • Making the unit tests [AAp11] made the final implementation stage much more comfortable.
    • Of course, in retrospect that is obvious.
  • Initially "MonadicContextualClassification.m" was not real a package, just a collection of global context functions with the prefix "ClCon". This made some programming design decisions harder, slower, and more cumbersome. By making a proper package the development became much easier because of the "peace of mind" brought by the context feature encapsulation.
  • The making of random pipeline tests, [AAp12], helped catch a fair amount of inconvenient "features" and bugs.
    • (Both tests sets [AAp11, AAp12] can be made to be more comprehensive.)
  • The design of a conversational agent for producing ClCon pipelines with natural language commands brought a very fruitful viewpoint on the overall functionalities and the determination and limits of the ClCon development goals. See [AAp13, AAp14, AAp15].

  • "Eat your own dog food", or in this case: "use ClCon functionalities to implement ClCon functionalities."

    • Since we are developing a DSL it is natural to use that DSL for its own advancement.

    • Again, in retrospect that is obvious. Also probably should be seen as a consequence of practicing a certain code refactoring discipline.

    • The reason to list that moral is that often it is somewhat "easier" to implement functionalities thinking locally, ad-hoc, forgetting or not reviewing other, already implemented functions.

  • In order to come up with a better design and find inconsistencies: write many pipelines and discuss them with co-workers.

    • This is obvious. I would like to mention that a somewhat good alternative to discussions is (i) writing this document and related ones and (ii) making, running, and examining of the random pipelines tests.

References

Packages

[AAp1] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

[AAp2] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

[AAp3] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

[AAp4] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

[AAp5] Anton Antonov, Receiver operating characteristic functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m .

[AAp6] Anton Antonov, Variable importance determination by classifiers implementation in Mathematica,(2015), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/VariableImportanceByClassifiers.m .

[AAp7] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m .

[AAp9] Anton Antonov, SSparseMatrix Mathematica package, (2018), MathematicaForPrediction at GitHub.

[AAp10] Anton Antonov, Obtain and transform Mathematica machine learning data-sets, (2018), MathematicaVsR at GitHub.

[AAp11] Anton Antonov, Monadic contextual classification Mathematica unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt .

[AAp12] Anton Antonov, Monadic contextual classification random pipelines Mathematica unit tests, (2018), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m .

ConversationalAgents Packages

[AAp13] Anton Antonov, Classifier workflows grammar in EBNF, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

[AAp14] Anton Antonov, Classifier workflows grammar Mathematica unit tests, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

[AAp15] Anton Antonov, ClCon translator Mathematica package, (2018), ConversationalAgents at GitHub. URL: https://github.com/antononcube/ConversationalAgents .

MathematicaForPrediction articles

[AA1] Anton Antonov, Monad code generation and extension, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction .

[AA2] Anton Antonov, "ROC for classifier ensembles, bootstrapping, damaging, and interpolation", (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

[AA3] Anton Antonov, "Importance of variables investigation guide", (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md .

Other

[Wk1] Wikipedia entry, Monad, URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

[Wk2] Wikipedia entry, Receiver operating characteristic, URL: https://en.wikipedia.org/wiki/Receiver_operating_characteristic .

[YL1] Yann LeCun et al., MNIST database site. URL: http://yann.lecun.com/exdb/mnist/ .

[PS1] Patrick Scheibe, Mathematica (Wolfram Language) support for IntelliJ IDEA, (2013-2018), Mathematica-IntelliJ-Plugin at GitHub. URL: https://github.com/halirutan/Mathematica-IntelliJ-Plugin .

Progressive Machine Learning Examples

Introduction

In this MathematicaVsR at GitHub project we show how to do Progressive machine learning using two types of classifiers based on:

  • Tries with Frequencies, [AAp2, AAp3, AA1],

  • Sparse Matrix Recommender framework [AAp4, AA2].

Progressive learning is a type of Online machine learning. For more details see [Wk1]. The Progressive learning problem is defined as follows.

Problem definition:

  • Assume that the data is sequentially available.
    • Meaning, at a given time only part of the data is available, and after a certain time interval new data can be obtained.

    • In view of classification, it is assumed that at a given time not all class labels are present in the data already obtained.

    • Let us call this a data stream.

  • Make a machine learning algorithm that updates its model continuously or sequentially in time over a given data stream.

    • Let us call such an algorithm a Progressive Learning Algorithm (PLA).

In comparison, the typical (classical) machine learning algorithms assume that representative training data is available and after training that data is no longer needed to make predictions. Progressive machine learning has more general assumptions about the data and its problem formulation is closer to how humans learn to classify objects.

Below we show the application of two types of classifiers as PLA's. One is based on Tries with Frequencies (TF), [AAp2, AAp3, AA1]; the other is based on an Item-item Recommender (IIR) framework, [AAp4, AA2].

Remark: Both TF and IIR come from tackling Unsupervised machine learning tasks, but here they are applied in the context of Supervised machine learning.
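To make the problem definition concrete, here is a minimal PLA sketch in Mathematica over a toy data stream. It uses the functions TrieCreate and TrieMerge of the Tries with Frequencies package, [AAp2, AA1]; the data and the update scheme are illustrative only.

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/TriesWithFrequencies.m"];

(* Toy data stream: batches arrive one at a time; each record ends with its class label. *)
dataStream = {
   {{"small", "round", "label:0"}, {"large", "round", "label:0"}},
   {{"small", "square", "label:1"}},
   {{"large", "square", "label:1"}, {"small", "round", "label:0"}}};

(* Progressive learning: merge the trie of each newly arrived batch into the current model. *)
plaModel = Fold[TrieMerge[#1, TrieCreate[#2]] &, TrieCreate[First@dataStream], Rest@dataStream];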

General workflow

The Mathematica and R notebooks follow the steps in the following flow chart.

"Progressive-machine-learning-with-Tries"

For detailed explanations see any of the notebooks.

Project organization

Mathematica files

R files

Example runs

(For details see Progressive-machine-learning-examples.md.)

Using Tries with Frequencies

Here is an example run with Tries with Frequencies, [AAp2, AA1]:

"PLA-Trie-run"

Here are the obtained ROC curves:

"PLA-Trie-ROCs-thresholds"

We can see that the Progressive learning process does improve its success rates over time.

Using an Item-item recommender system

Here is an example run with an Item-item recommender system, [AAp4, AA2]:

"PLA-SMR-run"

Here are the obtained ROC curves:

"PLA-SMR-ROCs-thresholds"

References

Packages

[AAp1] Anton Antonov, Obtain and transform Mathematica machine learning data-sets, GetMachineLearningDataset.m, (2018), MathematicaVsR at GitHub.

[AAp2] Anton Antonov, Java tries with frequencies Mathematica package, JavaTriesWithFrequencies.m, (2017), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Tries with frequencies R package, TriesWithFrequencies.R, (2014), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Sparse matrix recommender framework in Mathematica, SparseMatrixRecommenderFramework.m, (2014), MathematicaForPrediction at GitHub.

Articles

[Wk1] Wikipedia entry, Online machine learning.

[AA1] Anton Antonov, "Tries with frequencies in Java", (2017), MathematicaForPrediction at WordPress.

[AA2] Anton Antonov, "A Fast and Agile Item-Item Recommender: Design and Implementation", (2011), Wolfram Technology Conference 2011.

Text analysis of Trump tweets

Introduction

This post is to proclaim the MathematicaVsR at GitHub project “Text analysis of Trump tweets” in which we compare Mathematica and R over text analyses of Twitter messages made by Donald Trump (and his staff) before the 2016 USA presidential election.

The project follows and extends the exposition and analysis of the R-based blog post "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half" by David Robinson at VarianceExplained.org; see [1].

The blog post [1] links to several sources that claim that during the election campaign Donald Trump tweeted from his Android phone and his campaign staff tweeted from an iPhone. The blog post [1] examines this hypothesis in a quantitative way (using various R packages.)

The hypothesis in question is well summarized with the tweet:

Every non-hyperbolic tweet is from iPhone (his staff).
Every hyperbolic tweet is from Android (from him). pic.twitter.com/GWr6D8h5ed
— Todd Vaziri (@tvaziri) August 6, 2016

This conjecture is fairly well supported by the following mosaic plots, [2]:

TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Sentiment-Device TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Device-Weekday-Sentiment

We can see that the Twitter messages from iPhone are much more likely to be neutral, and the ones from Android are much more polarized. As Christian Rudder (one of the founders of OkCupid, a dating website) explains in the chapter "Death by a Thousand Mehs" of the book "Dataclysm", [3], having a polarizing image (online persona) is a very good strategy to engage an online audience:

[…] And the effect isn’t small — being highly polarizing will in fact get you about 70 percent more messages. That means variance allows you to effectively jump several "leagues" up in the dating pecking order — […]

(The mosaic plots above were made for the Mathematica-part of this project. Mosaic plots and weekday tags are not used in [1].)

Concrete steps

The Mathematica-part of this project does not follow closely the blog post [1]. After the ingestion of the data provided in [1], the Mathematica-part applies alternative algorithms to support and extend the analysis in [1].

The sections in the R-part notebook correspond to some — not all — of the sections in the Mathematica-part.

The following list of steps is for the Mathematica-part.

  1. Data ingestion
    • The blog post [1] shows how to do the ingestion of Donald Trump's Twitter messages in R.

    • That can be done in Mathematica too using the built-in function ServiceConnect, but that is not necessary since [1] provides a link to the ingested data used in [1]:
      load(url("http://varianceexplained.org/files/trump_tweets_df.rda"))

    • That leads to ingesting an R data frame in the Mathematica-part using RLink. (A sketch is given after this list.)

  2. Adding tags

    • We have to extract device tags for the messages — each message is associated with one of the tags "Android", "iPad", or "iPhone".

    • Using the message time-stamps, each message is associated with time tags corresponding to the creation time month, hour, weekday, etc.

    • Here is a summary of the data at this stage:

    "trumpTweetsTbl-Summary"

  3. Time series and time related distributions

    • We can make several types of time series plots for general insight and to support the main conjecture.

    • Here is a Mathematica made plot for the same statistic computed in [1] that shows differences in tweet posting behavior:

    "TimeSeries"

    • Here are distributions plots of tweets per weekday:

    "ViolinPlots"

  4. Classification into sentiments and Facebook topics

    • Using the built-in classifiers of Mathematica each tweet message is associated with a sentiment tag and a Facebook topic tag.

    • In [1] the results of this step are derived in several stages.

    • Here is a mosaic plot for conditional probabilities of devices, topics, and sentiments:

    "Device-Topic-Sentiment-MosaicPlot"

  5. Device-word association rules

    • Using Association rule learning, device tags are associated with words in the tweets.

    • In the Mathematica-part these association rules are not needed for the sentiment analysis (because of the built-in classifiers.)

    • The association rule mining is done mostly to support and extend the text analysis in [1] and, of course, for comparison purposes.

    • Here is an example of derived association rules together with their most important measures:

    "iPhone-Association-Rules"

In [1] the sentiments are derived from computed device-word associations, so in [1] the order of steps is 1-2-3-5-4. In Mathematica we do not need the steps 3 and 5 in order to get the sentiments in the 4th step.
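Here is a sketch of the RLink ingestion mentioned in step 1. It assumes that the .rda file defines the R data frame trump_tweets_df and that it has the columns id and text, as in [1]; treat the column names as illustrative.

Needs["RLink`"];
InstallR[];

(* Load the ingested data provided in [1] into the R session. *)
REvaluate["load(url('http://varianceexplained.org/files/trump_tweets_df.rda'))"];

(* Pull selected columns of the R data frame into Mathematica. *)
lsIDs = REvaluate["as.character(trump_tweets_df$id)"];
lsTexts = REvaluate["as.character(trump_tweets_df$text)"];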

Comparison

Using Mathematica for sentiment analysis is much more direct because of the built-in classifiers.
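For instance, with the built-in "Sentiment" and "FacebookTopic" classifiers the tagging of step 4 reduces to a few lines (lsTexts is the list of tweet texts from the RLink sketch above):

cfSentiment = Classify["Sentiment"];
cfTopic = Classify["FacebookTopic"];
lsSentiments = cfSentiment /@ lsTexts;
lsTopics = cfTopic /@ lsTexts;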

The R-based blog post [1] heavily uses the "pipeline" operator %>%, which is a relatively recent addition to R (and it is both fashionable and convenient to use.) In Mathematica the related operators are Postfix (//), Prefix (@), Infix (~), Composition (@*), and RightComposition (/*).
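Here is a toy illustration of the analogy with the R pipeline x %>% f() %>% g():

Range[10] // Select[EvenQ] // Total        (* Postfix chaining, closest to %>% *)
(Select[EvenQ] /* Total)[Range[10]]        (* the same computation with RightComposition *)

Both expressions yield 30.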

Making the time series plots with the R package "ggplot2" requires making special data frames. I am inclined to think that the Mathematica plotting of time series is more direct, but for this task the data wrangling code in Mathematica and R is fairly comparable.

Generally speaking, the R package "arules" — used in this project for Association rule learning — is somewhat awkward to use:

  • it is data frame centric, does not work directly with lists of lists, and

  • requires the use of factors.

The Apriori implementation in “arules” is much faster than the one in “AprioriAlgorithm.m” — “arules” uses a more efficient algorithm implemented in C.
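For reference, the basic association-rule measures are easy to compute directly in Mathematica. Here is a toy computation of the support and confidence of the rule {"android"} -> {"crooked"} over hypothetical word baskets:

(* Each basket is the set of words of one tweet together with its device tag. *)
baskets = {{"android", "crooked", "media"}, {"iphone", "thank", "you"}, {"android", "crooked", "hillary"}};
support[s_] := Count[baskets, b_ /; SubsetQ[b, s]]/Length[baskets];
confidence[a_, c_] := support[Union[a, c]]/support[a];
{support[{"android", "crooked"}], confidence[{"android"}, {"crooked"}]}

(* {2/3, 1} *)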

References

[1] David Robinson, "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half", (2016), VarianceExplained.org.

[2] Anton Antonov, "Mosaic plots for data visualization", (2014), MathematicaForPrediction at WordPress.

[3] Christian Rudder, Dataclysm, Crown, 2014. ASIN: B00J1IQUX8 .

Handwritten digits recognition by matrix factorization

Introduction

This MathematicaVsR at GitHub project is for comparing Mathematica and R for the tasks of classifier creation, execution, and evaluation using the MNIST database of images of handwritten digits.

Here are the bases built with the two matrix factorization methods:

  • Singular Value Decomposition (SVD)

SVD-basis-for-5

  • Non-Negative Matrix Factorization (NNMF)

NNMF-basis-for-5

Here are the confusion matrices of the two classifiers:

  • SVD

SVD-confusion-matrix

  • NNMF

NNMF-confusion-matrix

The blog post "Classification of handwritten digits" (published 2013) has a related more elaborated discussion over a much smaller database of handwritten digits.

Concrete steps

The concrete steps taken in scripts and documents of this project follow.

  1. Ingest the binary data files into arrays that can be visualized as digit images.

    • We have two sets: 60,000 training images and 10,000 testing images.

  2. Make a linear vector space representation of the images by simple unfolding.

  3. For each digit find the corresponding representation matrix and factorize it.

  4. Store the matrix factorization results in a suitable data structure. (These results comprise the classifier training.)

    • One of the matrix factors is seen as a new basis.

  5. For a given test image (and its linear vector space representation) find the basis that approximates it best. The corresponding digit is the classifier prediction for the given test image.

  6. Evaluate the classifier(s) over all test images and compute accuracy, F-Scores, and other measures.
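Here is a minimal Mathematica sketch of steps 2 through 5 for the SVD classifier. The random matrices stand in for the unfolded 28×28 training images of each digit; all names are illustrative.

(* Step 2: each image is unfolded into a 784-vector; random stand-ins here. *)
trainingByDigit = AssociationMap[RandomReal[1, {100, 784}] &, Range[0, 9]];

(* Steps 3 and 4: per digit, the top-k left singular vectors of the transposed
   representation matrix form the new basis (the columns of each U are basis images). *)
k = 20;
bases = Map[SingularValueDecomposition[Transpose[#], k][[1]] &, trainingByDigit];

(* Step 5: classify a test vector by the basis with the smallest reconstruction residual. *)
classifyDigit[v_?VectorQ] := First@Keys@MinimalBy[bases, Norm[v - #.(Transpose[#].v)] &];
classifyDigit[RandomReal[1, 784]]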

Scripts

There are scripts going through the steps listed above:

Documents

The following documents give expositions that are suitable for reading and following of steps and corresponding results.

Observations

Ingestion

I first figured out in R how to ingest the data in the binary files of the MNIST database. There are several online resources (blog posts, GitHub repositories) that discuss the ingestion of the MNIST binary files.

After that, making the corresponding code in Mathematica was easy.

Classification results

The classification results are the same in Mathematica and R for both SVD and NNMF. (As expected.)

NNMF

NNMF classifiers use the MathematicaForPrediction at GitHub implementations: NonNegativeMatrixFactorization.m and NonNegativeMatrixFactorization.R.
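A minimal usage sketch follows; it assumes the package's main factorization function GDCLS, so treat the exact signature as illustrative.

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m"];
{W, H} = GDCLS[RandomReal[1, {100, 784}], 20];  (* X ≈ W.H with non-negative factors *)
Dimensions /@ {W, H}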

Parallel computations

Both Mathematica and R have relatively simple set-ups for parallel computations.
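On the Mathematica side, for example, the evaluation of the classifier sketched above parallelizes with a few lines (testVectors is an illustrative stand-in for the test set):

LaunchKernels[];
DistributeDefinitions[classifyDigit, bases];
testVectors = RandomReal[1, {1000, 784}];
lsPredictions = ParallelMap[classifyDigit, testVectors];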

Graphics

It was not very straightforward to come up with visualizations of the MNIST images in R. The Mathematica visualization is much more flexible when it comes to plot labeling.

Going further

Comparison with other classifiers

Using Mathematica’s built-in classifiers it was easy to compare the SVD and NNMF classifiers with neural network ones and others. (The SVD and NNMF classifiers are much faster to build and they bring comparable precision.)

It would be nice to repeat that in R using one or several of the neural network classifiers provided by Google, Microsoft, H2O, Baidu, etc.

Classifier ensembles

Another possible extension is to use classifier ensembles and Receiver Operating Characteristic (ROC) curves to create better classifiers. (Both in Mathematica and R.)

Importance of variables

Using a classifier-agnostic importance-of-variables procedure we can figure out (see the sketch after this list):

  • which NNMF basis vectors (images) are most important for the classification precision,

  • which image rows or columns are most important for each digit, or similarly

  • which image squares of a, say, 4×4 image grid are most important.
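Here is a generic sketch of such a classifier-agnostic procedure: shuffle the values of one variable in the (test) data and measure the resulting drop in accuracy. (The package VariableImportanceByClassifiers.m from MathematicaForPrediction automates this kind of computation; the toy data and names below are illustrative.)

(* Accuracy of a ClassifierFunction cf over data given as {featureVector, label} pairs. *)
accuracy[cf_, data_] := Mean@Boole[Map[cf[#[[1]]] === #[[2]] &, data]];

(* Shuffle the values of the j-th variable across the records. *)
shuffleVariable[data_, j_] :=
  MapThread[{ReplacePart[#1[[1]], j -> #2], #1[[2]]} &,
   {data, RandomSample[data[[All, 1, j]]]}];

importanceOf[cf_, data_, j_] := accuracy[cf, data] - accuracy[cf, shuffleVariable[data, j]];

(* Toy usage: only the first variable matters, so it should show the largest accuracy drop. *)
train = Table[With[{x = RandomReal[1, 3]}, {x, If[x[[1]] > 0.5, "A", "B"]}], 200];
cf = Classify[Rule @@@ train];
importanceOf[cf, train, #] & /@ Range[3]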