Re-exploring the structure of Chinese character images

Introduction

In this notebook we show information retrieval and clustering
techniques applied to images of a Unicode collection of Chinese characters.
Here is the outline of the notebook’s exposition:

  1. Get Chinese character images.
  2. Cluster “image vectors” and demonstrate that the obtained
    clusters have certain explainability elements.
  3. Apply Latent Semantic Analysis (LSA) workflow to the character
    set.
  4. Show a visual thesaurus through a recommender system. (That uses
    Cosine similarity.)
  5. Discuss graph and hierarchical clustering using LSA matrix
    factors.
  6. Demonstrate approximation of “unseen” character images with an
    image basis obtained through LSA over a small set of (simple)
    images.
  7. Redo character approximation with more “interpretable” image
    basis.

Remark: This notebook started as an (extended) comment for the
Community discussion “Exploring structure of Chinese characters through
image processing”, [SH1]. (Hence the title.)

Get Chinese character images

This code is a copy of the code in the original Community post by
Silvia Hao, [SH1]:

Module[{fsize = 50, width = 64, height = 64}, 
  lsCharIDs = Map[FromCharacterCode[#, "Unicode"] &, 16^^4E00 - 1 + Range[width height]]; 
 ]
charPage = Module[{fsize = 50, width = 64, height = 64}, 
    16^^4E00 - 1 + Range[width height] // pipe[
      FromCharacterCode[#, "Unicode"] & 
      , Characters, Partition[#, width] & 
      , Grid[#, Background -> Black, Spacings -> {0, 0}, ItemSize -> {1.5, 1.2}, Alignment -> {Center, Center}, Frame -> All, FrameStyle -> Directive[Red, AbsoluteThickness[3 \[Lambda]]]] & 
      , Style[#, White, fsize, FontFamily -> "Source Han Sans CN", FontWeight -> "ExtraLight"] & 
      , Rasterize[#, Background -> Black] & 
     ] 
   ];
chargrid = charPage // ColorDistance[#, Red] & // Image[#, "Byte"] & // Sign //Erosion[#, 5] &;
lmat = chargrid // MorphologicalComponents[#, Method -> "BoundingBox", CornerNeighbors -> False] &;
chars = ComponentMeasurements[{charPage // ColorConvert[#, "Grayscale"] &, lmat}, "MaskedImage", #Width > 10 &] // Values // Map@RemoveAlphaChannel;
chars = Module[{size = chars // Map@ImageDimensions // Max}, ImageCrop[#, {size, size}] & /@ chars];

Here is a sample of the obtained images:

SeedRandom[33];
RandomSample[chars, 5]

Vector representation of images

Define a function that represents an image as a vector in a linear
vector space (of pixels):

Clear[ImageToVector];
ImageToVector[img_Image] := Flatten[ImageData[ColorConvert[img, "Grayscale"]]];
ImageToVector[img_Image, imgSize_] := Flatten[ImageData[ColorConvert[ImageResize[img, imgSize], "Grayscale"]]];
ImageToVector[___] := $Failed;

Show how the vector-represented images look:

Table[BlockRandom[
   img = RandomChoice[chars]; 
   ListPlot[ImageToVector[img], Filling -> Axis, PlotRange -> All, PlotLabel -> img, ImageSize -> Medium, AspectRatio -> 1/6], 
   RandomSeeding -> rs], {rs, {33, 998}}]

Data preparation

In this section we represent the images as vectors in a linear vector
space. (In which each pixel is a basis vector.)

Make an association with images:

aCImages = AssociationThread[lsCharIDs -> chars];
Length[aCImages]

(*4096*)

Make flat vectors from the images:

AbsoluteTiming[
  aCImageVecs = ParallelMap[ImageToVector, aCImages]; 
 ]

(*{0.998162, Null}*)

Make matrix plots of a random sample of the image vectors:

SeedRandom[32];
MatrixPlot[Partition[#, ImageDimensions[aCImages[[1]]][[2]]]] & /@ RandomSample[aCImageVecs, 6]

Clustering over the image vectors

In this section we cluster “image vectors” and demonstrate that the
obtained clusters have certain explainability elements. Expected Chinese
character radicals are observed using image multiplication.

Cluster the image vectors and show a summary of the cluster
lengths:

SparseArray[Values@aCImageVecs]
SeedRandom[334];
AbsoluteTiming[
  lsClusters = FindClusters[SparseArray[Values@aCImageVecs] -> Keys[aCImageVecs], 35, Method -> {"KMeans"}]; 
 ]
Length@lsClusters
ResourceFunction["RecordsSummary"][Length /@ lsClusters]

(*{24.6383, Null}*)

(*35*)

For each cluster:

  • Take 30 different small samples of up to 7 images
  • Multiply the images in each small sample
  • Show the three “most black” multiplication results
SeedRandom[33];
Table[i -> TakeLargestBy[Table[ImageMultiply @@ RandomSample[KeyTake[aCImages, lsClusters[[i]]], UpTo[7]], 30], Total@ImageToVector[#] &, 3], {i, Length[lsClusters]}]

Remark: We can see that the clustering above
produced “semantic” clusters – most of the multiplied images show
meaningful Chinese character radicals and their “expected
positions.”

Here is one of the clusters with the radical “mouth”:

KeyTake[aCImages, lsClusters[[26]]]

LSAMon application

In this section we apply the “standard” LSA workflow, [AA1, AA4].

Make a matrix with named rows and columns from the image vectors:

mat = ToSSparseMatrix[SparseArray[Values@aCImageVecs], "RowNames" -> Keys[aCImageVecs], "ColumnNames" -> Automatic]

The following Latent Semantic Analysis (LSA) monadic pipeline is used
in [AA2, AA3]:

SeedRandom[77];
AbsoluteTiming[
  lsaAllObj = 
    LSAMonUnit[]\[DoubleLongRightArrow]
     LSAMonSetDocumentTermMatrix[mat]\[DoubleLongRightArrow]
     LSAMonApplyTermWeightFunctions["None", "None", "Cosine"]\[DoubleLongRightArrow]
     LSAMonExtractTopics["NumberOfTopics" -> 60, Method -> "SVD", "MaxSteps" -> 15, "MinNumberOfDocumentsPerTerm" -> 0]\[DoubleLongRightArrow]
     LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]
     LSAMonEcho[Style["Obtained basis:", Bold, Purple]]\[DoubleLongRightArrow]
     LSAMonEchoFunctionContext[ImageAdjust[Image[Partition[#, ImageDimensions[aCImages[[1]]][[1]]]]] & /@SparseArray[#H] &]; 
 ]
(*{7.60828, Null}*)

Remark: LSAMon’s corresponding theory and design are
discussed in [AA1, AA4].

Get the representation matrix:

W2 = lsaAllObj\[DoubleLongRightArrow]LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]LSAMonTakeW

Get the topics matrix:

H = lsaAllObj\[DoubleLongRightArrow]LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]LSAMonTakeH

Cluster the reduced dimension representations and show a summary of
the cluster lengths:

AbsoluteTiming[
  lsClusters = FindClusters[Normal[SparseArray[W2]] -> RowNames[W2], 40, Method -> {"KMeans"}]; 
 ]
Length@lsClusters
ResourceFunction["RecordsSummary"][Length /@ lsClusters]

(*{2.33331, Null}*)

(*40*)

Show cluster interpretations:

AbsoluteTiming[aAutoRadicals = Association@Table[i -> TakeLargestBy[Table[ImageMultiply @@ RandomSample[KeyTake[aCImages, lsClusters[[i]]], UpTo[8]], 30], Total@ImageToVector[#] &, 3], {i, Length[lsClusters]}]; 
 ]
aAutoRadicals

(*{0.878406, Null}*)

Using FeatureExtraction

I experimented with clustering and approximation using WL’s function
FeatureExtraction. The results are fairly similar to the ones above;
the timings are different (a few times slower).
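
Here is a minimal sketch of such an experiment, assuming the association
aCImages from above; the default feature extractor and the number of
clusters (35) are illustrative choices, not necessarily the exact setup used:

SeedRandom[32];
AbsoluteTiming[
  feFunc = FeatureExtraction[Values[aCImages]];
  lsFeatureVecs = feFunc[Values[aCImages]];
  lsFEClusters = FindClusters[lsFeatureVecs -> Keys[aCImages], 35, Method -> {"KMeans"}];
 ]
ResourceFunction["RecordsSummary"][Length /@ lsFEClusters]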

Visual thesaurus

In this section we use Cosine similarity to find visual nearest
neighbors of Chinese character images.

matPixels = WeightTermsOfSSparseMatrix[lsaAllObj\[DoubleLongRightArrow]LSAMonTakeWeightedDocumentTermMatrix, "IDF", "None", "Cosine"];
matTopics = WeightTermsOfSSparseMatrix[lsaAllObj\[DoubleLongRightArrow]LSAMonNormalizeMatrixProduct[Normalized -> Left]\[DoubleLongRightArrow]LSAMonTakeW, "None", "None", "Cosine"];
smrObj = SMRMonUnit[]\[DoubleLongRightArrow]SMRMonCreate[<|"Topic" -> matTopics, "Pixel" -> matPixels|>];

Consider the character “團”:

aCImages["團"]

Here are the nearest neighbors for that character found by using both
image topics and image pixels:

(*focusItem=RandomChoice[Keys@aCImages];*)
  focusItem = {"團", "仼", "呔"}[[1]]; 
   smrObj\[DoubleLongRightArrow]
     SMRMonEcho[Style["Nearest neighbors by pixel topics:", Bold, Purple]]\[DoubleLongRightArrow]
     SMRMonSetTagTypeWeights[<|"Topic" -> 1, "Pixel" -> 0|>]\[DoubleLongRightArrow]
     SMRMonRecommend[focusItem, 8, "RemoveHistory" -> False]\[DoubleLongRightArrow]
     SMRMonEchoValue\[DoubleLongRightArrow]
     SMRMonEchoFunctionValue[AssociationThread[Values@KeyTake[aCImages, Keys[#]], Values[#]] &]\[DoubleLongRightArrow]
     SMRMonEcho[Style["Nearest neighbors by pixels:", Bold, Purple]]\[DoubleLongRightArrow]
     SMRMonSetTagTypeWeights[<|"Topic" -> 0, "Pixel" -> 1|>]\[DoubleLongRightArrow]
     SMRMonRecommend[focusItem, 8, "RemoveHistory" -> False]\[DoubleLongRightArrow]
     SMRMonEchoFunctionValue[AssociationThread[Values@KeyTake[aCImages, Keys[#]], Values[#]] &];

Remark: Of course, in the recommender pipeline above
we can use both pixels and pixel topics. (With their contributions
being weighted.)
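
For example, here is a sketch of such a mixed-contribution computation,
reusing the recommender pipeline elements above (the 0.7 / 0.3 weights
are illustrative):

smrObj\[DoubleLongRightArrow]
   SMRMonEcho[Style["Nearest neighbors by pixels and topics:", Bold, Purple]]\[DoubleLongRightArrow]
   SMRMonSetTagTypeWeights[<|"Topic" -> 0.7, "Pixel" -> 0.3|>]\[DoubleLongRightArrow]
   SMRMonRecommend[focusItem, 8, "RemoveHistory" -> False]\[DoubleLongRightArrow]
   SMRMonEchoFunctionValue[AssociationThread[Values@KeyTake[aCImages, Keys[#]], Values[#]] &];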

Graph clustering

In this section we demonstrate the use of graph communities to find
similar groups of Chinese characters.

Here we take a sub-matrix of the reduced dimension matrix computed
above:

W = lsaAllObj\[DoubleLongRightArrow]LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]LSAMonTakeW;

Here we find the similarity matrix between the characters and remove
entries corresponding to “small” similarities:

matSym = Clip[W . Transpose[W], {0.78, 1}, {0, 1}];

Here we plot the obtained (clipped) similarity matrix:

MatrixPlot[matSym]

Here we:

  • Take array rules of the sparse similarity matrix
  • Drop the rules corresponding to the diagonal elements
  • Convert the keys of the rules into undirected graph edges
  • Make the corresponding graph
  • Find graph’s connected components
  • Show the number of connected components
  • Show a tally of the number of nodes in the components
gr = Graph[UndirectedEdge @@@ DeleteCases[Union[Sort /@ Keys[SSparseMatrixAssociation[matSym]]], {x_, x_}]];
lsComps = ConnectedComponents[gr];
Length[lsComps]
ReverseSortBy[Tally[Length /@ lsComps], First]

(*138*)

(*{{1839, 1}, {31, 1}, {27, 1}, {16, 1}, {11, 2}, {9, 2}, {8, 1}, {7, 1}, {6, 5}, {5, 3}, {4, 8}, {3, 14}, {2, 98}}*)

Here we demonstrate that the clusters of Chinese characters make
sense:

aPrettyRules = Dispatch[Map[# -> Style[#, FontSize -> 36] &, Keys[aCImages]]];
CommunityGraphPlot[Subgraph[gr, TakeLargestBy[lsComps, Length, 10][[2]]], Method -> "SpringElectrical", VertexLabels -> Placed["Name", Above], AspectRatio -> 1, ImageSize -> 1000] /. aPrettyRules

Remark: By careful observation of the clusters and
graph connections we can convince ourselves that the similarities are
based on pictorial sub-elements (i.e. radicals) of the characters.

Hierarchical clustering

In this section we apply hierarchical clustering to the reduced
dimension representation of the Chinese character images.

Here we pick a cluster:

lsFocusIDs = lsClusters[[12]];
Magnify[ImageCollage[Values[KeyTake[aCImages, lsFocusIDs]]], 0.4]

Here is how we can make a dendrogram plot (not that useful here):

(*smat=W2\[LeftDoubleBracket]lsClusters\[LeftDoubleBracket]13\[RightDoubleBracket],All\[RightDoubleBracket];
Dendrogram[Thread[Normal[SparseArray[smat]]->Map[Style[#,FontSize->16]&,RowNames[smat]]],Top,DistanceFunction->EuclideanDistance]*)

Here is a heat-map plot with hierarchical clustering dendrogram (with
tool-tips):

gr = HeatmapPlot[W2[[lsFocusIDs, All]], DistanceFunction -> {CosineDistance, None}, Dendrogram -> {True, False}];
gr /. Map[# -> Tooltip[Style[#, FontSize -> 16], Style[#, Bold, FontSize -> 36]] &, lsFocusIDs]

Remark: The plot above has tooltips with larger
character images.

Representing all characters with a smaller set of basic ones

In this section we demonstrate that a relatively small set of simpler
Chinese character images can be used to represent (or approximate) the
rest of the images.

Remark: We use the following heuristic: the simpler
Chinese characters have the fewest white pixels.

Obtain a training set of images – the darkest ones – and show a
sample of that set:

{trainingInds, testingInds} = TakeDrop[Keys[SortBy[aCImages, Total[ImageToVector[#]] &]], 800];
SeedRandom[3];
RandomSample[KeyTake[aCImages, trainingInds], 12]

Show all training characters with an image collage:

Magnify[ImageCollage[Values[KeyTake[aCImages, trainingInds]], Background -> Gray, ImagePadding -> 1], 0.4]

Apply LSA monadic pipeline with the training characters only:

SeedRandom[77];
AbsoluteTiming[
  lsaPartialObj = 
    LSAMonUnit[]\[DoubleLongRightArrow]
     LSAMonSetDocumentTermMatrix[SparseArray[Values@KeyTake[aCImageVecs, trainingInds]]]\[DoubleLongRightArrow]
     LSAMonApplyTermWeightFunctions["None", "None", "Cosine"]\[DoubleLongRightArrow]
     LSAMonExtractTopics["NumberOfTopics" -> 80, Method -> "SVD", "MaxSteps" -> 120, "MinNumberOfDocumentsPerTerm" -> 0]\[DoubleLongRightArrow]
     LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]
     LSAMonEcho[Style["Obtained basis:", Bold, Purple]]\[DoubleLongRightArrow]
     LSAMonEchoFunctionContext[ImageAdjust[Image[Partition[#, ImageDimensions[aCImages[[1]]][[1]]]]] & /@SparseArray[#H] &]; 
 ]
(*{0.826489, Null}*)

Get the matrix and basis interpretation of the extracted image
topics:

H = 
   lsaPartialObj\[DoubleLongRightArrow]
    LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]
    LSAMonTakeH;
lsBasis = ImageAdjust[Image[Partition[#, ImageDimensions[aCImages[[1]]][[1]]]]] & /@ SparseArray[H];

Approximation of “unseen” characters

Pick a Chinese character image as a target image and pre-process
it:

ind = RandomChoice[testingInds];
imgTest = aCImages[ind];
matImageTest = ToSSparseMatrix[SparseArray@List@ImageToVector[imgTest, ImageDimensions[aCImages[[1]]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic];
imgTest

Find its representation with the chosen feature extractor (LSAMon
object here):

matReprsentation = lsaPartialObj\[DoubleLongRightArrow]LSAMonRepresentByTopics[matImageTest]\[DoubleLongRightArrow]LSAMonTakeValue;
lsCoeff = Normal@SparseArray[matReprsentation[[1, All]]];
ListPlot[MapIndexed[Tooltip[#1, lsBasis[[#2[[1]]]]] &, lsCoeff], Filling -> Axis, PlotRange -> All]

Show representation coefficients outliers:

lsBasis[[OutlierPosition[Abs[lsCoeff], TopOutliers@*HampelIdentifierParameters]]]

Show the interpretation of the found representation:

vecReprsentation = lsCoeff . SparseArray[H];
reprImg = Image[Unitize@Clip[#, {0.45, 1}, {0, 1}] &@Rescale[Partition[vecReprsentation, ImageDimensions[aCImages[[1]]][[1]]]]];
{reprImg, imgTest}

See the closest characters using image distances:

KeyMap[# /. aCImages &, TakeSmallest[ImageDistance[reprImg, #] & /@ aCImages, 4]]

Remark: By applying the approximation procedure to
all characters in the testing set we can convince ourselves that the small
training set provides good retrieval. (Not done here.)
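
Here is a sketch of how such a verification could be done over the
testing set, reusing the pipeline steps above; the helper function
approximateChar and the sample size of 100 are hypothetical, and the
code is not executed in this notebook:

Clear[approximateChar];
approximateChar[id_String] :=
   Module[{matT, lsC, vec},
     matT = ToSSparseMatrix[SparseArray@List@ImageToVector[aCImages[id], ImageDimensions[aCImages[[1]]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic];
     lsC = Normal@SparseArray[(lsaPartialObj\[DoubleLongRightArrow]LSAMonRepresentByTopics[matT]\[DoubleLongRightArrow]LSAMonTakeValue)[[1, All]]];
     vec = lsC . SparseArray[H];
     Image[Unitize@Clip[#, {0.45, 1}, {0, 1}] &@Rescale[Partition[vec, ImageDimensions[aCImages[[1]]][[1]]]]]
    ];
ResourceFunction["RecordsSummary"][Map[ImageDistance[approximateChar[#], aCImages[#]] &, RandomSample[testingInds, 100]]]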

Finding more interpretable bases

In this section we show how to use the LSA workflow with Non-Negative
Matrix Factorization (NNMF) over an image set extended with already
extracted “topic” images.

Cleaner automatic radicals

aAutoRadicals2 = Map[Dilation[Binarize[DeleteSmallComponents[#]], 0.5] &, First /@ aAutoRadicals]

Here we take an image union in order to remove the “duplicated”
radicals:

aAutoRadicals3 = AssociationThread[Range[Length[#]], #] &@Union[Values[aAutoRadicals2], SameTest -> (ImageDistance[#1, #2] < 14.5 &)]

LSAMon pipeline with NNMF

Make a matrix with named rows and columns from the image vectors:

mat1 = ToSSparseMatrix[SparseArray[Values@aCImageVecs], "RowNames" -> Keys[aCImageVecs], "ColumnNames" -> Automatic]

Enhance the matrix with radical instances:

mat2 = ToSSparseMatrix[SparseArray[Join @@ Map[Table[ImageToVector[#], 100] &, Values[aAutoRadicals3]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic];
mat3 = RowBind[mat1, mat2];

Apply the LSAMon workflow pipeline with NNMF for topic
extraction:

SeedRandom[77];
AbsoluteTiming[
  lsaAllExtendedObj = 
    LSAMonUnit[]\[DoubleLongRightArrow]
     LSAMonSetDocumentTermMatrix[mat3]\[DoubleLongRightArrow]
     LSAMonApplyTermWeightFunctions["None", "None", "Cosine"]\[DoubleLongRightArrow]
     LSAMonExtractTopics["NumberOfTopics" -> 60, Method -> "NNMF", "MaxSteps" -> 15, "MinNumberOfDocumentsPerTerm" -> 0]\[DoubleLongRightArrow]
     LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]
     LSAMonEcho[Style["Obtained basis:", Bold, Purple]]\[DoubleLongRightArrow]
     LSAMonEchoFunctionContext[ImageAdjust[Image[Partition[#, ImageDimensions[aCImages[[1]]][[1]]]]] & /@SparseArray[#H] &]; 
 ]
(*{155.289, Null}*)

Remark: Note that NNMF “found” the interpretable
radical images we enhanced the original image set with.

Get the matrix and basis interpretation of the extracted image
topics:

H = 
   lsaAllExtendedObj\[DoubleLongRightArrow]
    LSAMonNormalizeMatrixProduct[Normalized -> Right]\[DoubleLongRightArrow]
    LSAMonTakeH;
lsBasis = ImageAdjust[Image[Partition[#, ImageDimensions[aCImages[[1]]][[1]]]]] & /@ SparseArray[H];

Approximation

Pick a Chinese character image as a target image and pre-process
it:

SeedRandom[43];
ind = RandomChoice[testingInds];
imgTest = aCImages[ind];
matImageTest = ToSSparseMatrix[SparseArray@List@ImageToVector[imgTest, ImageDimensions[aCImages[[1]]]], "RowNames" -> Automatic, "ColumnNames" -> Automatic];
imgTest

Find its representation with the chosen feature extractor (LSAMon
object here):

matReprsentation = lsaAllExtendedObj\[DoubleLongRightArrow]LSAMonRepresentByTopics[matImageTest]\[DoubleLongRightArrow]LSAMonTakeValue;
lsCoeff = Normal@SparseArray[matReprsentation[[1, All]]];
ListPlot[MapIndexed[Tooltip[#1, lsBasis[[#2[[1]]]]] &, lsCoeff], Filling -> Axis, PlotRange -> All]

Show representation coefficients outliers:

lsBasis[[OutlierPosition[Abs[lsCoeff], TopOutliers@*QuartileIdentifierParameters]]]

Remark: Note that the expected radical images are among the outliers.

Show the interpretation of the found representation:

vecReprsentation = lsCoeff . SparseArray[H];
reprImg = Image[Unitize@Clip[#, {0.45, 1}, {0, 1}] &@Rescale[Partition[vecReprsentation, ImageDimensions[aCImages[[1]]][[1]]]]];
{reprImg, imgTest}

See the closest characters using image distances:

KeyMap[# /. aCImages &, TakeSmallest[ImageDistance[reprImg, #] & /@ aCImages, 4]]

Setup

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicSparseMatrixRecommender.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/HeatmapPlot.m"]

References

[SH1] Silvia Hao, “Exploring structure of Chinese characters through image processing”, (2022), Wolfram Community.

[AA1] Anton Antonov, “A monad for Latent Semantic Analysis workflows”, (2019), Wolfram Community.

[AA2] Anton Antonov, “LSA methods comparison over random mandalas deconstruction – WL”, (2022), Wolfram Community.

[AA3] Anton Antonov, “Bethlehem stars: classifying randomly generated mandalas”, (2020), Wolfram Community.

[AA4] Anton Antonov, “Random mandalas deconstruction in R, Python, and Mathematica”, (2022), MathematicaForPrediction at WordPress.

[AAp1] Anton Antonov, LSAMon for Image Collections Mathematica package, (2022), MathematicaForPrediction at GitHub.

How to simplify Machine learning workflows specifications? (useR! 2020)

Introduction

This blog post presents the slides of my lightning talk at the useR! 2020 Conference, St. Louis, USA.

Here is the video recording:

How to simplify Machine Learning workflows specifications?

useR! 2020 Conference, lightning talk

Anton Antonov
Senior Research Scientist
Accendo Data LLC
https://github.com/antononcube

What is this about?

Rapid specification of Machine Learning (ML) workflows using natural language commands.

The easiest things to automate with ML are ML workflows.

This presentation demonstrates that with natural language interfaces to ML algorithms.

Motivation

Assume that:

  1. We want to create conversational agents that help Data Science (DS) and ML practitioners to quickly create first versions of different DS and ML workflows for different programming languages and related packages.
  2. We expect that the initial versions of programming code are tweaked further. (In order to produce desired outcomes in the application area of interest.)

The workflows considered

In this presentation we focus on these three ML areas:

  • Quantile Regression (QR)
  • Latent Semantic Analysis (LSA)
  • Recommendations

First example data

Assume we have a data frame with temperature data. Here is a summary:

summary(dfTemperatureData)
##       Time            Temperature   
##  Min.   :3.629e+09   Min.   : 4.72  
##  1st Qu.:3.661e+09   1st Qu.:19.67  
##  Median :3.692e+09   Median :23.33  
##  Mean   :3.692e+09   Mean   :22.28  
##  3rd Qu.:3.724e+09   3rd Qu.:26.11  
##  Max.   :3.755e+09   Max.   :31.17
ggplot2::ggplot(dfTemperatureData) + ggplot2::geom_point( ggplot2::aes( x = Time, y = Temperature ) )

Quantile Regression workflow: first example

Here is a Quantile Regression (QR) workflow specification:

qrmon2 <- 
  eval( expr = to_QRMon_R_command( 
    "create from dfTemperatureData;
     compute quantile regression with 12 knots and probabilities 0.25, 0.5, and 0.75;
     show date list plot with date origin 1900-01-01;", parse = TRUE) )

How is it done?

For a given ML domain (like QR or LSA) we create two types of Domain Specific Languages (DSL’s):

  1. a software monad (i.e. programming language pipeline package) and
  2. a DSL that is a subset of a spoken language.

These two DSL’s are combined: the latter translates natural language commands into the former.

By executing those translations we turn commands of the spoken DSL’s into ML computational results.

Note that we assume that there is a separate system that converts speech into text.

Development cycle

Here is a clarification diagram:

MonadicMakingOfMLConversationalAgents

Grammars and parsers

For each type of workflow a specialized DSL translation Raku module is developed.

Each Raku module:

  1. Has grammars for parsing a sequence of natural commands of a certain DSL
  2. Translates the parsing results into corresponding software monad code

Different programming languages and packages can be the targets of the DSL translation.

(At this point DSL translators to Python, R, and Wolfram Language are implemented.)

Here is an example grammar.

Quantile Regression workflow: translation

Here is how a sequence of natural commands that specifies a QR workflow is translated into code for a QR software monad:

qrExpr <- 
  to_QRMon_R_command( 
    "create from dfTemperatureData;
     compute quantile regression with knots 12 and probabilities 0.05, 0.95;
     find outliers;", parse = FALSE )
qrExpr
## [1] "QRMonUnit( data = dfTemperatureData) %>%"                            
## [2] "QRMonQuantileRegression(df = 12, probabilities = c(0.05, 0.95)) %>%"
## [3] "QRMonOutliers() %>% QRMonOutliersPlot()"

Quantile Regression workflow: evaluation

Here we evaluate the generated QR monad code:

qrmon2 <- eval(expr = parse( text = paste(qrExpr)))
## Warning in QRMonSetData(res, data): The argument data is expected to be a data frame with columns: { Regressor, Value }.
## Warning in QRMonSetData(res, data): Proceeding by renaming the first columm "Time" as "Regressor" and renaming the second columm "Temperature" as "Value".

Latent Semantic Analysis workflow: translation

Here is how a sequence of natural commands that specifies a LSA workflow is translated into code for a LSA software monad:

lsaExpr <- 
  to_LSAMon_R_command( 
    "create from textHamlet;
     make document term matrix with automatic stop words and without stemming;
     apply lsi functions global weight function idf, local term weight function none, normalizer function cosine;
     extract 12 topics using method SVD, max steps 120, and min number of documents per term 2;
     show thesaurus table for ghost and grave;", parse = FALSE  )
lsaExpr
## [1] "LSAMonUnit(textHamlet) %>%"                                                                                                         
## [2] "LSAMonMakeDocumentTermMatrix( stopWords = NULL, stemWordsQ = FALSE) %>%"                                                            
## [3] "LSAMonApplyTermWeightFunctions(globalWeightFunction = \"IDF\", localWeightFunction = \"None\", normalizerFunction = \"Cosine\") %>%"
## [4] "LSAMonExtractTopics( numberOfTopics = 12, method = \"SVD\",  maxSteps = 120, minNumberOfDocumentsPerTerm = 2) %>%"                  
## [5] "LSAMonEchoStatisticalThesaurus( words = c(\"ghost\", \"grave\"))"

Latent Semantic Analysis workflow: evaluation

Here we execute the generated LSA monad code:

lsamon2 <- eval(expr = parse( text = paste(lsaExpr)))
## Warning in NonNegativeMatrixFactorization::NearestWords(lsaObj$H, word, : More that one column name corresponds to the search word; the first match is used.
##    SearchTerm Word.Distance Word.Index Word.Word
## 1       ghost  0.0000000000        623     ghost
## 2       ghost  2.7104582353         27     again
## 3       ghost  3.0323549513        513    father
## 4       ghost  3.1750339154       1465     stage
## 5       ghost  3.2285642571       1644     under
## 6       ghost  3.2393248725        319     cries
## 7       ghost  3.3826537734       1312         s
## 8       ghost  3.4975768169       1512     swear
## 9       ghost  3.5204077364        935       mar
## 10      ghost  3.5868377279       1550      thee
## 11      ghost  3.5869119178        744       hor
## 12      ghost  3.5901934408       1587       thy
## 13      grave  0.0000000000        648     grave
## 14      grave  0.0002598782       1597    tongue
## 15      grave  0.0002828355        151    better
## 16      grave  0.0002891626       1317      said
## 17      grave  0.0003122740        741    honour
## 18      grave  0.0003327156       1419     sleep
## 19      grave  0.0003395627        897      long
## 20      grave  0.0003459771         60       any
## 21      grave  0.0004090686       1251    reason
## 22      grave  0.0004264220         58    answer
## 23      grave  0.0004381933        643     grace
## 24      grave  0.0004560758        429      each

Recommender workflow: data

Consider the making of a recommender over the Titanic data:

dfTitanic[sample(1:nrow(dfTitanic), 12), ]
##           id passengerClass passengerAge passengerSex passengerSurvival
## 1225 id.1225            3rd           20         male              died
## 443   id.443            2nd           20         male              died
## 761   id.761            3rd           30         male          survived
## 835   id.835            3rd           30         male              died
## 706   id.706            3rd           -1         male              died
## 339   id.339            2nd           30         male              died
## 515   id.515            2nd            0         male          survived
## 10     id.10            1st           70         male              died
## 579   id.579            2nd           30         male              died
## 1248 id.1248            3rd           -1       female          survived
## 673   id.673            3rd           -1         male              died
## 1205 id.1205            3rd           20         male              died
summary(as.data.frame(unclass(dfTitanic), stringsAsFactors = TRUE))
##        id       passengerClass  passengerAge   passengerSex passengerSurvival
##  id.1   :   1   1st:323        Min.   :-1.00   female:466   died    :809     
##  id.10  :   1   2nd:277        1st Qu.:10.00   male  :843   survived:500     
##  id.100 :   1   3rd:709        Median :20.00                                 
##  id.1000:   1                  Mean   :23.55                                 
##  id.1001:   1                  3rd Qu.:40.00                                 
##  id.1002:   1                  Max.   :80.00                                 
##  (Other):1303

Recommender workflow

Here are recommender workflow specification and evaluation results:

smrmon2 <- 
  eval( expr = to_SMRMon_R_command( 
    "create from dfTitanic; 
     apply the LSI functions inverse document frequency, term frequency, and cosine;
     compute the top 6 recommendations for the profile female=1, 30=1; 
     extend recommendations with dfTitanic;
     show pipeline value", parse = TRUE ) )
##   Score Index      id passengerClass passengerAge passengerSex passengerSurvival
## 1     2     1    id.1            1st           30       female          survived
## 2     2    33 id.1027            3rd           30       female          survived
## 3     2    68 id.1059            3rd           30       female              died
## 4     2    72 id.1062            3rd           30       female          survived
## 5     2    99 id.1087            3rd           30       female              died
## 6     2   108 id.1095            3rd           30       female          survived

Handling misspellings

The approach taken in the design and implementation of the natural language commands interpreters can handle misspellings:

smrmon2 <- 
  eval( expr = to_SMRMon_R_command( 
    "create from dfTitanic; 
     aply the LSI functions inverse document frequency, term frequency, and cosine;
     compute the top 6 recomendations for the profle female=1, 30=1; 
     extend recommendations with dfTitanic;
     show pipeline value" ) )
## [1] "Possible misspelling of 'apply' as 'aply'."                     
## [2] "Possible misspelling of 'recommendations' as 'recomendations'."
## [3] "Possible misspelling of 'profile' as 'profle'."                
##   Score Index      id passengerClass passengerAge passengerSex passengerSurvival
## 1     2     1    id.1            1st           30       female          survived
## 2     2    33 id.1027            3rd           30       female          survived
## 3     2    68 id.1059            3rd           30       female              died
## 4     2    72 id.1062            3rd           30       female          survived
## 5     2    99 id.1087            3rd           30       female              died
## 6     2   108 id.1095            3rd           30       female          survived

Not just ML workflows

Obviously this approach can be used for any type of computational workflow.

Here is an example of an Epidemiology Modeling workflow:

ecmCommands <- 
'create with the model susceptible exposed infected two hospitalized recovered;
 assign 100000 to the susceptible population;
 set infected normally symptomatic population to be 0;
 set infected severely symptomatic population to be 1;
 assign 0.56 to contact rate of infected normally symptomatic population;
 assign 0.58 to contact rate of infected severely symptomatic population;
 assign 0.1 to contact rate of the hospitalized population;
 simulate for 240 days;
 plot populations results;'
to_ECMMon_R_command(ecmCommands, parse = TRUE)
## expression(
##     ECMMonUnit(model = SEI2HRModel()) %>% 
##     ECMMonAssignInitialConditions(initConds = c(SPt = 1e+05)) %>% 
##     ECMMonAssignInitialConditions(initConds = c(INSPt = 0)) %>% 
##     ECMMonAssignInitialConditions(initConds = c(ISSPt = 1)) %>% 
##     ECMMonAssignRateValues(rateValues = c(contactRateINSP = 0.56)) %>% 
##     ECMMonAssignRateValues(rateValues = c(contactRateISSP = 0.58)) %>% 
##     ECMMonAssignRateValues(rateValues = c(contactRateHP = 0.1)) %>% 
##     ECMMonSimulate(maxTime = 240) %>% ECMMonPlotSolutions(stocksSpec = ".*Population")
## )
ecmmon2 <- eval( to_ECMMon_R_command(ecmCommands) )

References

Repositories

[AAr1] Anton Antonov, R-packages, (2019), GitHub.

[AAr2] Anton Antonov, Conversational Agents, (2017), GitHub.

Packages

[AAp1] Anton Antonov, Quantile Regression Monad in R, (2019), GitHub.

[AAp2] Anton Antonov, Latent Semantic Analysis Monad in R, (2019), R-packages at GitHub.

[AAp3] Anton Antonov, Sparse Matrix Recommender Monad in R, (2019), R-packages at GitHub.

[AAp4] Anton Antonov, Epidemiology Compartmental Modeling Monad in R, (2020), GitHub.

The Great conversation in USA presidential speeches

Introduction

This document shows a way to chart in Mathematica / WL the evolution of topics in collections of texts. The making of this document (and related code) is primarily motivated by the fascinating concept of the Great Conversation, [Wk1, MA1]. In brief, all western civilization books are based on 103 great ideas; if we find the great ideas each significant book is based on we can construct a time-line (spanning centuries) of the great conversation between the authors; see [MA1, MA2, MA3].

Instead of finding the great ideas in a text collection we extract topics statistically, using dimension reduction with Non-Negative Matrix Factorization (NNMF), [AAp3, AA1, AA2].

The presented computational results are based on the text collections of State of the Union speeches of USA presidents [D2]. The code in this document can be easily configured to use the much smaller text collection [D1] available online and in Mathematica/WL. (The collection [D1] is fairly small, 51 documents; the collection [D2] is much larger, 2453 documents.)

The procedures (and code) described in this document, of course, work on other types of text collections. For example: movie reviews, podcasts, editorial articles of a magazine, etc.

A secondary objective of this document is to illustrate the use of the monadic programming pipeline as a Software design pattern, [AA3]. In order to make the code concise in this document I wrote the package MonadicLatentSemanticAnalysis.m, [AAp5]. Compare with the code given in [AA1].

The very first version of this document was written for the 2017 summer course “Data Science for the Humanities” at the University of Oxford, UK.

Outline of the procedure applied

The procedure described in this document has the following steps.

  1. Get a collection of documents with known dates of publishing.
    • Or other types of tags associated with the documents.
  2. Do preliminary analysis of the document collection.
    • Number of documents; number of unique words.

    • Number of words per document; number of documents per word.

    • (Some of the statistics of this step are easier to compute after the Linear vector space representation step.)

  3. Optionally perform Natural Language Processing (NLP) tasks.

    1. Obtain or derive stop words.

    2. Remove stop words from the texts.

    3. Apply stemming to the words in the texts.

  4. Linear vector space representation.

    • This means that we represent the collection with a document-word matrix.

    • Each unique word is a basis vector in that space.

    • For each document the corresponding point in that space is derived from the number of appearances of document’s words.

  5. Extract topics.

    • In this document NNMF is used.

    • In order to obtain better results with NNMF some experimentation and refinements of the topics search have to be done.

  6. Map the documents over the extracted topics.

    • The original matrix of the vector space representation is replaced with a matrix with columns representing topics (instead of words.)
  7. Order the topics according to their presence across the years (or other related tags).
    • This can be done with hierarchical clustering.

    • Alternatively,

      1. for a given topic find the weighted mean of the years of the documents that have that topic, and

      2. order the topics according to those mean values.

  8. Visualize the evolution of the documents according to their topics.

    1. This can be done by simply finding the contingency matrix year vs topic.

    2. For the president speeches we can use the president names for time-line temporal axis instead of years.

      • Because the corresponding time intervals of president office occupation do not overlap.

Remark: Some of the functions used in this document combine several steps into one function call (with corresponding parameters.)

Packages

This loads the packages [AAp1-AAp8]:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/HeatmapPlot.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/Misc/RSparseMatrix.m"];

(Note that some of the packages are imported automatically by [AAp5].)

The functions of the central package in this document, [AAp5], have the prefix “LSAMon”. Here is a sample of those names:

Short@Names["LSAMon*"]

(* {"LSAMon", "LSAMonAddToContext", "LSAMonApplyTermWeightFunctions", <>, "LSAMonUnit", "LSAMonUnitQ", "LSAMonWhen"} *)

Data load

In this section we load a text collection from a specified source.

The text collection from “Presidential Nomination Acceptance Speeches”, [D1], is small and can be used for multiple code verifications and re-runs. The “State of Union addresses of USA presidents” text collection from [D2] was converted to a Mathematica/WL object by Christopher Wolfram (and sent to me in a private communication.) The text collection [D2] provides far more interesting results (and they are shown below.)

If[True,
  speeches = ResourceData[ResourceObject["Presidential Nomination Acceptance Speeches"]];
  names = StringSplit[Normal[speeches[[All, "Person"]]][[All, 2]], "::"][[All, 1]],

  (*ELSE*)
  (*State of the union addresses provided by Christopher Wolfram. *)      
  Get["~/MathFiles/Digital humanities/Presidential speeches/speeches.mx"];
  names = Normal[speeches[[All, "Name"]]];
];

dates = Normal[speeches[[All, "Date"]]];
texts = Normal[speeches[[All, "Text"]]];

Dimensions[speeches]

(* {2453, 4} *)

Basic statistics for the texts

Using different contingency matrices we can derive basic statistical information about the document collection. (The document-word matrix is a contingency matrix.)

First we convert the text data into long form:

docWordRecords = 
  Join @@ MapThread[
    Thread[{##}] &, {Range@Length@texts, names, 
     DateString[#, {"Year"}] & /@ dates, 
     DeleteStopwords@*TextWords /@ ToLowerCase[texts]}, 1];

Here is a sample of the rows of the long-form:

GridTableForm[RandomSample[docWordRecords, 6], 
 TableHeadings -> {"document index", "name", "year", "word"}]

Here is a summary:

Multicolumn[
 RecordsSummary[docWordRecords, {"document index", "name", "year", "word"}, "MaxTallies" -> 8], 4, Dividers -> All, Alignment -> Top]

Using the long form we can compute the document-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {1, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], 
 MaxPlotPoints -> 300, ImageSize -> 800, 
 AspectRatio -> 1/3]

Here is the president-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {2, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], MaxPlotPoints -> 300, ImageSize -> 800, AspectRatio -> 1/3]

Here is an alternative way to compute text collection statistics through the document-word matrix computed within the monad LSAMon:

LSAMonUnit[texts]⟹LSAMonEchoTextCollectionStatistics[];

Procedure application

Stop words

Here is one way to obtain stop words:

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];
Length[stopWords]
RandomSample[stopWords, 12]

(* 304 *)

(* {"has", "almost", "next", "WHO", "seeming", "together", "rather", "runners-up", "there's", "across", "cannot", "me"} *)

We can complete this list with additional stop words derived from the collection itself. (Not done here.)

Linear vector space representation and dimension reduction

Remark: In the rest of the document we use “term” to mean “word” or “stemmed word”.

The following code makes a document-term matrix from the document collection, exaggerates the representations of the terms using “TF-IDF”, and then does topic extraction through dimension reduction. The dimension reduction is done with NNMF; see [AAp3, AA1, AA2].

SeedRandom[312]

mObj =
  LSAMonUnit[texts]⟹
   LSAMonMakeDocumentTermMatrix[{}, stopWords]⟹
   LSAMonApplyTermWeightFunctions[]⟹
   LSAMonTopicExtraction[Max[5, Ceiling[Length[texts]/100]], 60, 12, "MaxSteps" -> 6, "PrintProfilingInfo" -> True];

This table shows the pipeline commands above with comments:

Detailed description

The monad object mObj has a context of named values that is an Association with the following keys:

Keys[mObj⟹LSAMonTakeContext]

(* {"texts", "docTermMat", "terms", "wDocTermMat", "W", "H", "topicColumnPositions", "automaticTopicNames"} *)

Let us clarify the values by briefly describing the computational steps.

  1. From texts we derive the document-term matrix \text{docTermMat}\in \mathbb{R}^{m \times n}, where m is the number of documents and n is the number of terms.
    • The terms are words or stemmed words.

    • This is done with LSAMonMakeDocumentTermMatrix.

  2. From docTermMat the (weighted) matrix wDocTermMat is derived using “TF-IDF”.

    • This is done with LSAMonApplyTermWeightFunctions.
  3. Using docTermMat we find the terms that are present in a sufficiently large number of documents and their column indices are assigned to topicColumnPositions.

  4. Matrix factorization.

    1. Take the sub-matrix \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\in \mathbb{R}^{m \times n_1}, where n_1 = |\text{topicColumnPositions}|.

    2. Compute using NNMF the factorization \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\approx W H, where W\in \mathbb{R}^{m \times k}, H\in \mathbb{R}^{k \times n_1}, and k is the number of topics.

    3. The values for the keys “W”, “H”, and “topicColumnPositions” are computed and assigned by LSAMonTopicExtraction. (A quick check of the factor dimensions is shown right after this list.)

  5. From the top terms of each topic are derived automatic topic names and assigned to the key automaticTopicNames in the monad context.

    • Also done by LSAMonTopicExtraction.
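
As a quick check of the dimensions discussed in step 4, we can inspect the corresponding matrices in the monad context (a small sketch, assuming the context keys listed above):

Map[Dimensions, KeyTake[mObj⟹LSAMonTakeContext, {"docTermMat", "wDocTermMat", "W", "H"}]]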

Statistical thesaurus

At this point in the object mObj we have the factors of NNMF. Using those factors we can find a statistical thesaurus for a given set of words. The following code calculates such a thesaurus, and echoes it in a tabulated form.

queryWords = {"arms", "banking", "economy", "education", "freedom", 
   "tariff", "welfare", "disarmament", "health", "police"};

mObj⟹
  LSAMonStatisticalThesaurus[queryWords, 12]⟹
  LSAMonEchoStatisticalThesaurus[];

By observing the thesaurus entries we can see that the words in each entry are semantically related.

Note that the word “welfare” strongly associates with “[applause]”. The rest of the query words do not, which can be seen by examining larger thesaurus entries:

thRes =
  mObj⟹
   LSAMonStatisticalThesaurus[queryWords, 100]⟹
   LSAMonTakeValue;
Cases[thRes, "[applause]", Infinity]

(* {"[applause]", "[applause]"} *)

The second “[applause]” associated word is “education”.

Detailed description

The statistical thesaurus is computed by using the NNMF’s right factor H.

For a given term, its corresponding column in H is found and the nearest neighbors of that column (a vector in the topic space \mathbb{R}^{k}) are found using the Euclidean norm.
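
Here is a small sketch of that computation done directly with Nearest, assuming that the columns of H correspond to the terms at the positions topicColumnPositions of the context key "terms" (the package’s internal implementation may differ); the query word "economy" is just an example:

Block[{ctx = mObj⟹LSAMonTakeContext, matH, thTerms, nf},
 matH = Normal[ctx["H"]];
 thTerms = ctx["terms"][[ctx["topicColumnPositions"]]];
 nf = Nearest[Transpose[matH] -> thTerms, DistanceFunction -> EuclideanDistance];
 nf[Transpose[matH][[First @ Flatten @ Position[thTerms, "economy"]]], 12]
 ]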

Extracted topics

The topics are the rows of the right factor H of the factorization obtained with NNMF.

Let us tabulate the topics found above with LSAMonTopicExtraction:

mObj⟹ LSAMonEchoTopicsTable["NumberOfTerms" -> 6, "MagnificationFactor" -> 0.8, Appearance -> "Horizontal"];

Map documents over the topics

The function LSAMonTopicsRepresentation finds the top outliers for each row of NNMF’s left factor W. (The outliers are found using the package [AAp4].) The obtained list of indices gives the topic representation of the collection of texts.

Short@(mObj⟹LSAMonTopicsRepresentation[]⟹LSAMonTakeContext)["docTopicIndices"]

{{53}, {47, 53}, {25}, {46}, {44}, {15, 42}, {18}, <>, {30}, {33}, {7, 60}, {22, 25}, {12, 13, 25, 30, 49, 59}, {48, 57}, {14, 41}}

Further we can see that if the documents have tags associated with them — like author names or dates — we can make a contingency matrix of tags vs topics. (See [AAp8, AA4].) This is also done by the function LSAMonTopicsRepresentation that takes tags as an argument. If the tags argument is Automatic, then the tags are simply the document indices.

Here is an example:

rsmat = mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Here is an example of calling the function LSAMonTopicsRepresentation with arbitrary tags.

rsmat = mObj⟹LSAMonTopicsRepresentation[DateString[#, "MonthName"] & /@ dates]⟹LSAMonTakeValue;
MatrixPlot[rsmat]

Note that the matrix plots above are very close to the charting of the Great conversation that we are looking for. This can be made more obvious by observing the row names and column names in the tabulation of the transposed matrix rsmat:

Magnify[#, 0.6] &@MatrixForm[Transpose[rsmat]]

Charting the great conversation

In this section we show several ways to chart the Great Conversation in the collection of speeches.

There are several possible ways to make the chart: using a time-line plot, using a heat-map plot, and using appropriate tabulation (with MatrixForm or Grid).

In order to make the code in this section more concise the package RSparseMatrix.m, [AAp7, AA5], is used.

Topic name to topic words

This command makes an Association between the topic names and the top topic words.

aTopicNameToTopicTable = 
  AssociationThread[(mObj⟹LSAMonTakeContext)["automaticTopicNames"], 
   mObj⟹LSAMonTopicsTable["NumberOfTerms" -> 12]⟹LSAMonTakeValue];

Here is a sample:

Magnify[#, 0.7] &@ aTopicNameToTopicTable[[1 ;; 3]]

Time-line plot

This command makes a contingency matrix between the documents and the topics (as described above):

rsmat = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue]

This time-line plot shows the great conversation in the USA presidents’ State of the Union speeches:

TimelinePlot[
 Association@
  MapThread[
   Tooltip[#2, aTopicNameToTopicTable[#2]] -> dates[[ToExpression@#1]] &, 
   Transpose[RSparseMatrixToTriplets[rsmat]]], 
 PlotTheme -> "Detailed", ImageSize -> 1000, AspectRatio -> 1/2, PlotLayout -> "Stacked"]

The plot is too cluttered, so it is a good idea to investigate other visualizations.

Topic vs president heatmap

We can use the USA president names instead of years in the Great Conversation chart because the USA presidents’ terms do not overlap.

This makes a contingency matrix presidents vs topics:

rsmat2 = ToRSparseMatrix[
   mObj⟹LSAMonTopicsRepresentation[
     names]⟹LSAMonTakeValue];

Here we compute the chronological order of the presidents based on the dates of their speeches:

nameToMeanYearRules = 
  Map[#[[1, 1]] -> Mean[N@#[[All, 2]]] &, 
   GatherBy[MapThread[List, {names, ToExpression[DateString[#, "Year"]] & /@ dates}], First]];
ordRowInds = Ordering[RowNames[rsmat2] /. nameToMeanYearRules];

This heat-map plot uses the (experimental) package HeatmapPlot.m, [AAp6]:

Block[{m = rsmat2[[ordRowInds, All]]},
 HeatmapPlot[SparseArray[m], RowNames[m], 
  Thread[Tooltip[ColumnNames[m], aTopicNameToTopicTable /@ ColumnNames[m]]],
  DistanceFunction -> {None, Sort}, ImageSize -> 1000, 
  AspectRatio -> 1/2]
 ]

Note the value of the option DistanceFunction: there is no re-ordering of the rows, and the columns are re-ordered by sorting. Also, the topic names on the horizontal axis have tool-tips.

References

Text data

[D1] Wolfram Data Repository, "Presidential Nomination Acceptance Speeches".

[D2] US Presidents, State of the Union Addresses, Trajectory, 2016. ISBN 1681240009, 9781681240008.

[D3] Gerhard Peters, "Presidential Nomination Acceptance Speeches and Letters, 1880-2016", The American Presidency Project.

[D4] Gerhard Peters, "State of the Union Addresses and Messages", The American Presidency Project.

Packages

[AAp1] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Monadic latent semantic analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, RSparseMatrix Mathematica package, (2015), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub.

Books and articles

[AA1] Anton Antonov, "Topic and thesaurus extraction from a document collection", (2013), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, "Statistical thesaurus from NPR podcasts", (2013), MathematicaForPrediction at WordPress blog.

[AA3] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub.

[AA4] Anton Antonov, "Contingency tables creation examples", (2016), MathematicaForPrediction at WordPress blog.

[AA5] Anton Antonov, "RSparseMatrix for sparse matrices with named rows and columns", (2015), MathematicaForPrediction at WordPress blog.

[Wk1] Wikipedia entry, Great Conversation.

[MA1] Mortimer Adler, "The Great Conversation Revisited," in The Great Conversation: A Peoples Guide to Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago, 1990, p. 28.

[MA2] Mortimer Adler, "Great Ideas".

[MA3] Mortimer Adler, "How to Think About the Great Ideas: From the Great Books of Western Civilization", 2000, Open Court.

Natural language processing with functional parsers

Natural Language Processing (NLP) can be done with a structural approach using grammar rules. (The other type of NLP uses statistical methods.) In this post I discuss the use of functional parsers for the parsing and interpretation of small sets of natural language sentences within specific contexts. Functional parsing is also known as monadic parsing and parsing combinators.

Generally, I am using functional parsers to make Domain-Specific Languages (DSL’s). I use DSL’s to make command interfaces to search and recommendation engines and also to design and prototype conversational engines. I use extensively the so called Backus-Naur Form (BNF) for the grammar specifications. Clearly a DSL can be very close to natural language and provide sufficient means for interpretation within a given (narrow) context. (Like function integration in Calculus, smart phone directory browsing and selection, or search for something to eat nearby.)

I implemented and uploaded a package for construction of functional parsers: see FunctionalParsers.m hosted by the project MathematicaForPrediction at GitHub.

The package provides the ability to quickly program parsers using a core system of functional parsers as described in the article “Functional parsers” by Jeroen Fokker.

The parsers (in both the package and the article) are categorized into the groups: basic, combinators, and transformers. Immediate interpretation can be done with transformer parsers, but the package also provides functions for evaluation of parser output within a context of data and functions.
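
To give a flavor of the approach, here is a minimal, hypothetical sketch of the functional parsers idea in Wolfram Language; it is not the actual API of FunctionalParsers.m. A parser takes a list of tokens and returns a list of {remaining-tokens, parsed-value} pairs; combinators build larger parsers from smaller ones; a transformer applies a function to the parsed values:

(* Basic parser: matches one literal token. *)
Clear[pSymbol, pAlt, pSeq, pApply];
pSymbol[s_String][tokens_List] := If[Length[tokens] > 0 && First[tokens] === s, {{Rest[tokens], s}}, {}];
(* Combinators: alternatives and sequential composition. *)
pAlt[p1_, p2_][tokens_List] := Join[p1[tokens], p2[tokens]];
pSeq[p1_, p2_][tokens_List] := Flatten[Map[Function[{r1}, Map[{#[[1]], {r1[[2]], #[[2]]}} &, p2[r1[[1]]]]], p1[tokens]], 1];
(* Transformer: applies a function to the parsed value. *)
pApply[f_, p_][tokens_List] := Map[{#[[1]], f[#[[2]]]} &, p[tokens]];

(* Parse a toy integration request into an inert expression. *)
pRequest = pApply[integrationRequest @@ # &, pSeq[pSymbol["integrate"], pAlt[pSymbol["x"], pSymbol["y"]]]];
pRequest[{"integrate", "x"}]

(* {{{}, integrationRequest["integrate", "x"]}} *)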

Probably most importantly, the package provides functions for automatic generation of parsers from grammars in EBNF.

Here is an example of parsing the sentences of an integration requests language:

Interpretation of integration requests

Here is the grammar:
Integration requests EBNF grammar

The grammar can be visualized with this mind map:

Integration command

The mind map was hand made with MindNode Pro. Generally, the branches represent alternatives, but if two branches are connected the direction of the arrow connecting them shows a sequence combination.

With the FunctionalParsers.m package we can automatically generate a
mind map by parsing the string of the grammar in EBNF to OPML:

IntegrationRequestsGenerated

(And here is a PDF of the automatically generated mind map: IntegrationRequestsGenerated.)

I also made a slide show that gives an introduction to how the package is used: “Functional parsers for an integration requests language grammar”.

A more complicated example is this conversational engine for manipulation of time series data. (Data loading, finding outliers and trends. More details in the next blog post.)