The Great conversation in USA presidential speeches


This document shows a way to chart in Mathematica / WL the evolution of topics in collections of texts. The making of this document (and related code) is primarily motivated by the fascinating concept of the Great Conversation, [Wk1, MA1]. In brief, the books of Western civilization are seen as based on 103 great ideas; if we identify the great ideas each significant book draws upon, we can construct a time-line (spanning centuries) of the great conversation between the authors; see [MA1, MA2, MA3].

Instead of finding the great ideas in a text collection we extract topics statistically, using dimension reduction with Non-Negative Matrix Factorization (NNMF), [AAp3, AA1, AA2].

The presented computational results are based on the text collection of State of the Union speeches of USA presidents [D2]. The code in this document can be easily configured to use the much smaller text collection [D1] available online and in Mathematica/WL. (The collection [D1] is fairly small, 51 documents; the collection [D2] is much larger, 2453 documents.)

The procedures (and code) described in this document, of course, work on other types of text collections. For example: movie reviews, podcasts, editorial articles of a magazine, etc.

A secondary objective of this document is to illustrate the use of the monadic programming pipeline as a software design pattern, [AA3]. In order to make the code in this document concise I wrote the package MonadicLatentSemanticAnalysis.m, [AAp5]. Compare with the code given in [AA1].

The very first version of this document was written for the 2017 summer course “Data Science for the Humanities” at the University of Oxford, UK.

Outline of the procedure applied

The procedure described in this document has the following steps.

  1. Get a collection of documents with known dates of publishing.
    • Or other types of tags associated with the documents.
  2. Do preliminary analysis of the document collection.
    • Number of documents; number of unique words.

    • Number of words per document; number of documents per word.

    • (Some of the statistics of this step are easier to compute after the linear vector space representation step.)

  3. Optionally perform Natural Language Processing (NLP) tasks.

    1. Obtain or derive stop words.

    2. Remove stop words from the texts.

    3. Apply stemming to the words in the texts.

  4. Linear vector space representation.

    • This means that we represent the collection with a document-word matrix.

    • Each unique word is a basis vector in that space.

    • For each document the corresponding point in that space is derived from the number of appearances of the document’s words.

  5. Extract topics.

    • In this document NNMF is used.

    • In order to obtain better results with NNMF, some experimentation and refinement of the topic search have to be done.

  6. Map the documents over the extracted topics.

    • The original matrix of the vector space representation is replaced with a matrix with columns representing topics (instead of words).
  7. Order the topics according to their presence across the years (or other related tags).
    • This can be done with hierarchical clustering.

    • Alternatively,

      1. for a given topic find the weighted mean of the years of the documents that have that topic, and

      2. order the topics according to those mean values.

  8. Visualize the evolution of the documents according to their topics.

    1. This can be done by simply finding the contingency matrix year vs topic.

    2. For the president speeches we can use the president names instead of years for the temporal axis of the time-line.

      • This works because the presidents’ terms of office do not overlap.

Remark: Some of the functions used in this document combine several steps into one function call (with corresponding parameters.)


This loads the packages [AAp1-AAp8]:
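
A minimal loading sketch follows; the exact file locations within the MathematicaForPrediction repository are assumed here (only two of the packages are shown), so adjust the paths if the repository layout differs.

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MathematicaForPredictionUtilities.m"];
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"];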


(Note that some of the packages are imported automatically by [AAp5].)

The functions of the central package in this document, [AAp5], have the prefix “LSAMon”. Here is a sample of those names:
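
One way to obtain such a sample, assuming [AAp5] has been loaded, is with:

Short[Names["LSAMon*"]]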


(* {"LSAMon", "LSAMonAddToContext", "LSAMonApplyTermWeightFunctions", <>, "LSAMonUnit", "LSAMonUnitQ", "LSAMonWhen"} *)

Data load

In this section we load a text collection from a specified source.

The text collection from “Presidential Nomination Acceptance Speeches”, [D1], is small and can be used for multiple code verifications and re-runs. The “State of Union addresses of USA presidents” text collection from [D2] was converted to a Mathematica/WL object by Christopher Wolfram (and sent to me in a private communication). The text collection [D2] provides far more interesting results (shown below).

  (* Presidential Nomination Acceptance Speeches, [D1]. *)
  speeches = ResourceData[ResourceObject["Presidential Nomination Acceptance Speeches"]];
  names = StringSplit[Normal[speeches[[All, "Person"]]][[All, 2]], "::"][[All, 1]];

  (*State of the union addresses provided by Christopher Wolfram. *)      
  Get["~/MathFiles/Digital humanities/Presidential speeches/"];
  names = Normal[speeches[[All, "Name"]]];

dates = Normal[speeches[[All, "Date"]]];
texts = Normal[speeches[[All, "Text"]]];
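
The dimensions below presumably come from querying the dataset object directly:

Dimensions[speeches]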


(* {2453, 4} *)

Basic statistics for the texts

Using different contingency matrices we can derive basic statistical information about the document collection. (The document-word matrix is a contingency matrix.)

First we convert the text data into long form:

docWordRecords = 
  Join @@ MapThread[
    Thread[{##}] &, {Range@Length@texts, names, 
     DateString[#, {"Year"}] & /@ dates, 
     DeleteStopwords@*TextWords /@ ToLowerCase[texts]}, 1];

Here is a sample of the rows of the long-form:

GridTableForm[RandomSample[docWordRecords, 6], 
 TableHeadings -> {"document index", "name", "year", "word"}]

Here is a summary:

Multicolumn[
 RecordsSummary[docWordRecords, {"document index", "name", "year", "word"}, "MaxTallies" -> 8], 4, Dividers -> All, Alignment -> Top]

Using the long form we can compute the document-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {1, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], 
 MaxPlotPoints -> 300, ImageSize -> 800, 
 AspectRatio -> 1/3]

Here is the president-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {2, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], MaxPlotPoints -> 300, ImageSize -> 800, AspectRatio -> 1/3]

Here is an alternative way to compute text collection statistics through the document-word matrix computed within the monad LSAMon:
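
A minimal sketch of that alternative follows. It assumes the framework-generated function LSAMonTakeContext (see [AA3]) is available and, for simplicity, uses an empty stop words list.

smat = (LSAMonUnit[texts]⟹
     LSAMonMakeDocumentTermMatrix[{}, {}]⟹
     LSAMonTakeContext)["docTermMat"];
Dimensions[smat]
RecordsSummary[Normal@Total[smat, {2}], {"number of words per document"}]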


Procedure application

Stop words

Here is one way to obtain stop words:

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];
Length[stopWords]

(* 304 *)

RandomSample[stopWords, 12]

(* {"has", "almost", "next", "WHO", "seeming", "together", "rather", "runners-up", "there's", "across", "cannot", "me"} *)

We can complete this list with additional stop words derived from the collection itself. (Not done here.)
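
One possible way to do that derivation, sketched under the assumption that ctMat is the document-word contingency matrix computed above and that its column names are stored under the key "ColumnNames", is to take the terms that occur in a very large fraction of the documents:

termDocCounts = Normal@Total@Unitize[ctMat["XTABMatrix"]]; (* number of documents per word *)
collectionStopWords = Pick[ctMat["ColumnNames"], Thread[termDocCounts > 0.9*Length[texts]]];
stopWords = Union[stopWords, collectionStopWords];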

Linear vector space representation and dimension reduction

Remark: In the rest of the document we use “term” to mean “word” or “stemmed word”.

The following code makes a document-term matrix from the document collection, re-weights the term entries using “TF-IDF”, and then does topic extraction through dimension reduction. The dimension reduction is done with NNMF; see [AAp3, AA1, AA2].


mObj =
  LSAMonUnit[texts]⟹
   LSAMonMakeDocumentTermMatrix[{}, stopWords]⟹
   LSAMonApplyTermWeightFunctions[]⟹
   LSAMonTopicExtraction[Max[5, Ceiling[Length[texts]/100]], 60, 12, "MaxSteps" -> 6, "PrintProfilingInfo" -> True];

This table shows the pipeline commands above with comments:

Detailed description

The monad object mObj has a context of named values that is an Association with the following keys:
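
These keys can be listed, assuming the framework-generated function LSAMonTakeContext is available, with:

Keys[mObj⟹LSAMonTakeContext]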


(* {"texts", "docTermMat", "terms", "wDocTermMat", "W", "H", "topicColumnPositions", "automaticTopicNames"} *)

Let us clarify the values by briefly describing the computational steps.

  1. From texts we derive the document-term matrix \text{docTermMat}\in \mathbb{R}^{m \times n}, where m is the number of documents and n is the number of terms.
    • The terms are words or stemmed words.

    • This is done with LSAMonMakeDocumentTermMatrix.

  2. From docTermMat the (weighted) matrix wDocTermMat is derived using “TF-IDF”.

    • This is done with LSAMonApplyTermWeightFunctions.
  3. Using docTermMat we find the terms that are present in a sufficiently large number of documents; their column indices are assigned to topicColumnPositions.

  4. Matrix factorization.

    1. Take the sub-matrix \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\in \mathbb{R}^{m \times n_1}, where n_1 = |\text{topicColumnPositions}|.

    2. Compute using NNMF the factorization \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]]\approx W H, where W\in \mathbb{R}^{m \times k}, H\in \mathbb{R}^{k \times n_1}, and k is the number of topics.

    3. The values for the keys “W”, “H”, and “topicColumnPositions” are computed and assigned by LSAMonTopicExtraction.

  5. Automatic topic names are derived from the top terms of each topic and assigned to the key automaticTopicNames in the monad context.

    • Also done by LSAMonTopicExtraction.

Statistical thesaurus

At this point in the object mObj we have the factors of NNMF. Using those factors we can find a statistical thesaurus for a given set of words. The following code calculates such a thesaurus, and echoes it in a tabulated form.

queryWords = {"arms", "banking", "economy", "education", "freedom", 
   "tariff", "welfare", "disarmament", "health", "police"};

mObj⟹
  LSAMonStatisticalThesaurus[queryWords, 12];

By observing the thesaurus entries we can see that the words in each entry are semantically related.

Note that the word “welfare” strongly associates with “[applause]”. The rest of the query words do not, which can be seen by examining larger thesaurus entries:

thRes =
   mObj⟹
    LSAMonStatisticalThesaurus[queryWords, 100]⟹
    LSAMonTakeValue;
Cases[thRes, "[applause]", Infinity]

(* {"[applause]", "[applause]"} *)

The second “[applause]” associated word is “education”.

Detailed description

The statistical thesaurus is computed by using the NNMF’s right factor H.

For a given term, its corresponding column in H is found, and the nearest neighbors of that column among the other term columns are found in the space \mathbb{R}^{k} using the Euclidean norm.
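
The following sketch illustrates that computation outside of the monad. It is only an illustration of what LSAMonStatisticalThesaurus does internally, and it assumes that LSAMonTakeContext is available for extracting H, the terms, and the topic column positions.

ctx = mObj⟹LSAMonTakeContext;
H = Normal[ctx["H"]];
terms = ctx["terms"][[ctx["topicColumnPositions"]]]; (* the terms corresponding to the columns of H *)
nf = Nearest[Transpose[H] -> terms, DistanceFunction -> EuclideanDistance];
nf[Transpose[H][[Position[terms, "economy"][[1, 1]]]], 12]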

Extracted topics

The topics are the rows of the right factor H of the factorization obtained with NNMF.

Let us tabulate the topics found above with LSAMonTopicExtraction:

mObj⟹ LSAMonEchoTopicsTable["NumberOfTerms" -> 6, "MagnificationFactor" -> 0.8, Appearance -> "Horizontal"];

Map documents over the topics

The function LSAMonTopicsRepresentation finds the top outliers for each row of NNMF’s left factor W. (The outliers are found using the package [AAp4].) The obtained list of indices gives the topic representation of the collection of texts.
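
The list below presumably comes from a call along the following lines; the exact call form is an assumption.

Short[mObj⟹LSAMonTopicsRepresentation[]⟹LSAMonTakeValue]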


{{53}, {47, 53}, {25}, {46}, {44}, {15, 42}, {18}, <>, {30}, {33}, {7, 60}, {22, 25}, {12, 13, 25, 30, 49, 59}, {48, 57}, {14, 41}}

Further we can see that if the documents have tags associated with them — like author names or dates — we can make a contingency matrix of tags vs topics. (See [AAp8, AA4].) This is also done by the function LSAMonTopicsRepresentation that takes tags as an argument. If the tags argument is Automatic, then the tags are simply the document indices.

Here is an example:

rsmat = mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue;

Here is an example of calling the function LSAMonTopicsRepresentation with arbitrary tags.

rsmat = mObj⟹LSAMonTopicsRepresentation[DateString[#, "MonthName"] & /@ dates]⟹LSAMonTakeValue;

Note that the matrix plots above are very close to the charting of the Great Conversation that we are looking for. This can be made more obvious by observing the row names and column names in the tabulation of the transposed matrix rsmat:

Magnify[#, 0.6] &@MatrixForm[Transpose[rsmat]]

Charting the great conversation

In this section we show several ways to chart the Great Conversation in the collection of speeches.

There are several possible ways to make the chart: using a time-line plot, using a heat-map plot, and using appropriate tabulation (with MatrixForm or Grid).

In order to make the code in this section more concise the package RSparseMatrix.m, [AAp7, AA5], is used.

Topic name to topic words

This command makes an Association between the topic names and the top topic words.

aTopicNameToTopicTable = 
   mObj⟹LSAMonTopicsTable["NumberOfTerms" -> 12]⟹LSAMonTakeValue;

Here is a sample:

Magnify[#, 0.7] &@ aTopicNameToTopicTable[[1 ;; 3]]

Time-line plot

This command makes a contingency matrix between the documents and the topics (as described above):

rsmat = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue]

This time-line plot shows the Great Conversation in the USA presidents’ State of the Union speeches:

TimelinePlot[
 GroupBy[
  (* gather, for every topic, the dates of the documents in which that topic is represented *)
  Tooltip[#2, aTopicNameToTopicTable[#2]] -> dates[[ToExpression@#1]] & @@@ 
   (Most[ArrayRules[SparseArray[rsmat]]][[All, 1]] /. 
     {r_Integer, c_Integer} :> {RowNames[rsmat][[r]], ColumnNames[rsmat][[c]]}), 
  First -> Last], 
 PlotTheme -> "Detailed", ImageSize -> 1000, AspectRatio -> 1/2, PlotLayout -> "Stacked"]

The plot is too cluttered, so it is a good idea to investigate other visualizations.

Topic vs president heatmap

We can use the USA president names instead of years in the Great Conversation chart because the USA presidents’ terms of office do not overlap.

This makes a contingency matrix presidents vs topics:

rsmat2 = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[names]⟹LSAMonTakeValue];

Here we compute the chronological order of the presidents based on the dates of their speeches:

nameToMeanYearRules = 
  Map[#[[1, 1]] -> Mean[N@#[[All, 2]]] &, 
   GatherBy[MapThread[List, {names, ToExpression[DateString[#, "Year"]] & /@ dates}], First]];
ordRowInds = Ordering[RowNames[rsmat2] /. nameToMeanYearRules];

This heat-map plot uses the (experimental) package HeatmapPlot.m, [AAp6]:

Block[{m = rsmat2[[ordRowInds, All]]},
 HeatmapPlot[SparseArray[m], RowNames[m], 
  Thread[Tooltip[ColumnNames[m], aTopicNameToTopicTable /@ ColumnNames[m]]],
  DistanceFunction -> {None, Sort}, ImageSize -> 1000, 
  AspectRatio -> 1/2]
 ]

Note the value of the option DistanceFunction: there is no re-ordering of the rows, and the columns are re-ordered by sorting. Also, the topics on the horizontal axis have tool-tips.


References

Text data

[D1] Wolfram Data Repository, "Presidential Nomination Acceptance Speeches".

[D2] US Presidents, State of the Union Addresses, Trajectory, 2016. ISBN 1681240009 / 9781681240008.

[D3] Gerhard Peters, "Presidential Nomination Acceptance Speeches and Letters, 1880-2016", The American Presidency Project.

[D4] Gerhard Peters, "State of the Union Addresses and Messages", The American Presidency Project.


Packages

[AAp1] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Monadic latent semantic analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, RSparseMatrix Mathematica package, (2015), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub.

Books and articles

[AA1] Anton Antonov, "Topic and thesaurus extraction from a document collection", (2013), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, "Statistical thesaurus from NPR podcasts", (2013), MathematicaForPrediction at WordPress blog.

[AA3] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub.

[AA4] Anton Antonov, "Contingency tables creation examples", (2016), MathematicaForPrediction at WordPress blog.

[AA5] Anton Antonov, "RSparseMatrix for sparse matrices with named rows and columns", (2015), MathematicaForPrediction at WordPress blog.

[Wk1] Wikipedia entry, Great Conversation.

[MA1] Mortimer Adler, "The Great Conversation Revisited," in The Great Conversation: A People's Guide to Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago, 1990, p. 28.

[MA2] Mortimer Adler, "Great Ideas".

[MA3] Mortimer Adler, "How to Think About the Great Ideas: From the Great Books of Western Civilization", 2000, Open Court.


Comparison of dimension reduction algorithms over mandala images generation


This document discusses concrete algorithms for two different approaches to the generation of mandala images, [1]: direct construction with graphics primitives, and the use of machine learning algorithms.

In the experiments described in this document better results were obtained with the direct algorithms. The direct algorithms were made for the Mathematica StackExchange question "Code that generates a mandala", [3].

The main goals of this document are:

  1. to show some pretty images exploiting symmetry and multiplicity (see this album),

  2. to provide an illustrative example of comparing dimension reduction methods,

  3. to give a set-up for further discussions and investigations on mandala creation with machine learning algorithms.

Two direct construction algorithms are given: one uses "seed" segment rotations, the other superimposes layers of different types. The following plots show the order in which different mandala parts are created with each of the algorithms.


In this document we use several algorithms for dimension reduction applied to collections of images following the procedure described in [4,5]. We are going to show that with Non-Negative Matrix Factorization (NNMF) we can use mandalas made with the seed segment rotation algorithm to extract layer types and superimpose them to make colored mandalas. Using the same approach with Singular Value Decomposition (SVD) or Independent Component Analysis (ICA) does not produce good layers and the superimposition produces more "watered-down", less diverse mandalas.

From a more general perspective this document compares the statistical approach of "trying to see without looking" with the "direct simulation" approach. Another perspective is the creation of "design spaces"; see [6].

The idea of using machine learning algorithms is appealing because there is no need to make the mental effort of understanding, discerning, approximating, and programming the principles of mandala creation. We can "just" use a large collection of mandala images and generate new ones using the "internal knowledge" data of machine learning algorithms. For example, a Neural network system like Deep Dream, [2], might be made to dream of mandalas.

Direct algorithms for mandala generation

In this section we present two different algorithms for generating mandalas. The first sees a mandala as being generated by rotation of a "seed" segment. The second sees a mandala as being generated by different component layers. For other approaches see [3].

The request of [3] is for generation of mandalas for coloring by hand. That is why the mandala generation algorithms are in the grayscale space. Coloring the generated mandala images is a secondary task.

By seed segment rotations

One way to come up with mandalas is to generate a seed segment and then produce the mandala by an appropriate number of rotations of that segment.

Here is a function and an example of random segment (seed) generation:

MakeSeedSegment[radius_, angle_, n_Integer: 10, 
   connectingFunc_: Polygon, keepGridPoints_: False] :=
  Block[{t},
   t = Table[
     Line[{radius*r*{Cos[angle], Sin[angle]}, {radius*r, 0}}], {r, 0, 1, 1/n}];
   Join[If[TrueQ[keepGridPoints], t, {}], {GrayLevel[0.25], 
     connectingFunc@RandomSample[Flatten[t /. Line[{x_, y_}] :> {x, y}, 1]]}]
  ];

seed = MakeSeedSegment[10, Pi/12, 10];
Graphics[seed, Frame -> True]

This function can make a seed segment symmetric:

MakeSymmetric[seed_] := {seed, 
   GeometricTransformation[seed, ReflectionTransform[{0, 1}]]};

seed = MakeSymmetric[seed];
Graphics[seed, Frame -> True]

Using a seed we can generate mandalas with different specification signatures:

MakeMandala[opts : OptionsPattern[]] :=      
   MakeMandala[
    MakeSymmetric[
     MakeSeedSegment[20, Pi/12, 12, 
      RandomChoice[{Line, Polygon, BezierCurve, 
        FilledCurve[BezierCurve[#]] &}], False]], Pi/6, opts];

MakeMandala[seed_, angle_?NumericQ, opts : OptionsPattern[]] :=      
   Graphics[GeometricTransformation[seed, 
     Table[RotationMatrix[a], {a, 0, 2 Pi - angle, angle}]], opts];

This code randomly selects symmetry and seed generation parameters (number of concentric circles, angles):

n = 12;
Multicolumn[
 MapThread[
  If[#1,
     MakeMandala[MakeSeedSegment[10, #2, #3], #2],
     MakeMandala[
      MakeSymmetric[MakeSeedSegment[10, #2, #3, #4, False]], 2 #2]
     ] &, {RandomChoice[{False, True}, n], 
   RandomChoice[{Pi/7, Pi/8, Pi/6}, n], 
   RandomInteger[{8, 14}, n], 
   RandomChoice[{Line, Polygon, BezierCurve, 
     FilledCurve[BezierCurve[#]] &}, n]}], 4]

Here is a more concise way to generate symmetric segment mandalas:

Multicolumn[Table[Image@MakeMandala[], {12}], 5]

Note that with this approach the programming of the mandala coloring is not that trivial — weighted blending of colorized mandalas is the easiest thing to do. (Shown below.)

By layer types

This approach was given by Simon Woods in [3].

"For this one I’ve defined three types of layer, a flower, a simple circle and a ring of small circles. You could add more for greater variety."

The coloring approach with image blending given below did not work well for this algorithm, so I modified the original code in order to produce colored mandalas.

ClearAll[LayerFlower, LayerDisk, LayerSpots, MandalaByLayers]

LayerFlower[n_, a_, r_, colorSchemeInd_Integer] := 
  Module[{b = RandomChoice[{-1/(2 n), 0}]}, {If[
     colorSchemeInd == 0, White, 
     RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
    Cases[
     ParametricPlot[
       r (a + Cos[n t])/(a + 1) {Cos[t + b Sin[2 n t]], Sin[t + b Sin[2 n t]]}, {t, 0, 2 Pi}], 
     l_Line :> FilledCurve[l], -1]}];

LayerDisk[_, _, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
    RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
   Disk[{0, 0}, r]};

LayerSpots[n_, a_, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
    RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
   Translate[Disk[{0, 0}, r a/(4 n)], r CirclePoints[n]]};

MandalaByLayers[n_, m_, coloring : (False | True) : False, opts : OptionsPattern[]] := 
  Graphics[{EdgeForm[Black], White, 
    Table[RandomChoice[{3, 2, 1} -> {LayerFlower, LayerDisk, LayerSpots}][n, RandomReal[{3, 5}], i, 
       If[coloring, RandomInteger[{1, 17}], 0]]~Rotate~(Pi i/n), {i, m, 1, -1}]}, opts];

Here are generated black-and-white mandalas.

ImageCollage[Table[Image@MandalaByLayers[16, 20], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]

Here are some colored mandalas. (Which make me think more of Viking and Native American art than mandalas.)

ImageCollage[Table[Image@MandalaByLayers[16, 20, True], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]

Training data

Images by direct generation

iSize = 400;

AbsoluteTiming[
 mandalaImages = 
   Table[Image[
      MakeMandala[
       MakeSymmetric@
        MakeSeedSegment[10, Pi/12, 12, RandomChoice[{Polygon, FilledCurve[BezierCurve[#]] &}]], Pi/6], 
      ImageSize -> {iSize, iSize}, ColorSpace -> "Grayscale"], {300}];
 ]

(* {39.31, Null} *)

ImageCollage[ColorNegate /@ RandomSample[mandalaImages, 12], Background -> White, ImagePadding -> 3, ImageSize -> 400]

External image data

See the section "Using World Wide Web images".

Direct blending

The most interesting results are obtained with the image blending procedure coded below over mandala images generated with the seed segment rotation algorithm.

directBlendingImages = Table[
   RemoveBackground@
    ImageAdjust[
     Blend[Colorize[#, 
         ColorFunction -> 
          RandomChoice[{"IslandColors", "FruitPunchColors", 
            "AvocadoColors", "Rainbow"}]] & /@ 
       RandomChoice[mandalaImages, 4], RandomReal[1, 4]]], {36}];

ImageCollage[directBlendingImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]


Dimension reduction algorithms application

In this section we are going to apply the dimension reduction algorithms Singular Value Decomposition (SVD), Independent Component Analysis (ICA), and Non-Negative Matrix Factorization (NNMF) to a linear vector space representation (a matrix) of an image dataset. In the next section we use the bases generated by those algorithms to make mandala images.
We are going to use the packages [7,8] for ICA and NNMF respectively.


Linear vector space representation

The linear vector space representation of the images is simple — each image is flattened to a vector (row-wise), and the image vectors are put into a matrix.

mandalaMat = Flatten@*ImageData@*ColorNegate /@ mandalaImages;
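
The matrix dimensions shown below presumably come from:

Dimensions[mandalaMat]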

(* {300, 160000} *)

Re-factoring and basis images

The following code re-factors the images matrix with SVD, ICA, and NNMF and extracts the basis images.

AbsoluteTiming[
 svdRes = SingularValueDecomposition[mandalaMat, 20];
 ]
(* {5.1123, Null} *)

svdBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose@svdRes[[3]]];

AbsoluteTiming[
 icaRes = 
   IndependentComponentAnalysis[Transpose[mandalaMat], 20, 
    PrecisionGoal -> 4, "MaxSteps" -> 100];
 ]
(* {23.41, Null} *)

icaBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose[icaRes[[1]]]];

AbsoluteTiming[
 nnmfRes = 
   GDCLS[mandalaMat, 20, PrecisionGoal -> 4, 
    "MaxSteps" -> 20, "RegularizationParameter" -> 0.1];
 ]
(* {233.209, Null} *)

nnmfBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, nnmfRes[[2]]];


Let us visualize the bases derived with the matrix factorization methods.

Grid[{{"SVD", "ICA", "NNMF"},
      Map[ImageCollage[#, Automatic, {400, 500}, 
        Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, 
      {svdBasisImages, icaBasisImages, nnmfBasisImages}]
     }, Dividers -> All]


Here are some observations for the bases.

  1. The SVD basis has an average mandala image as its first vector and the other vectors are "differences" to be added to that first vector.

  2. The SVD and ICA bases are structured similarly. That is because ICA and SVD are both based on orthogonality — ICA factorization uses an orthogonality criterion based on Gaussian noise properties (which is more relaxed than SVD’s standard orthogonality criterion).

  3. As expected, the NNMF basis images have black background because of the enforced non-negativity. (Black corresponds to 0, white to 1.)

  4. Compared to the SVD and ICA bases the images of the NNMF basis are structured in a radial manner. This can be demonstrated using image binarization.

Grid[{{"SVD", "ICA", "NNMF"}, Map[ImageCollage[Binarize[#, 0.5] & /@ #, Automatic, {400, 500}, Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, {svdBasisImages, icaBasisImages, nnmfBasisImages}] }, Dividers -> All]

We can see that binarizing of the NNMF basis images shows them as mandala layers. In other words, using NNMF we can convert the mandalas of the seed segment rotation algorithm into mandalas generated by an algorithm that superimposes layers of different types.

Blending with image bases samples

In this section we just show different blending images using the SVD, ICA, and NNMF bases.

Blending function definition

Options[MandalaImageBlending] = {"BaseImage" -> {}, "BaseImageWeight" -> Automatic, "PostBlendingFunction" -> (RemoveBackground@*ImageAdjust)};
MandalaImageBlending[basisImages_, nSample_Integer: 4, n_Integer: 12, opts : OptionsPattern[]] :=      
  Block[{baseImage, baseImageWeight, postBlendingFunc, sImgs, sImgWeights},
   baseImage = OptionValue["BaseImage"];
   baseImageWeight = OptionValue["BaseImageWeight"];
   postBlendingFunc = OptionValue["PostBlendingFunction"];
   Table[(
     sImgs = 
      Flatten@Join[{baseImage}, RandomSample[basisImages, nSample]];
     If[NumericQ[baseImageWeight] && ImageQ[baseImage],
      sImgWeights = 
       Join[{baseImageWeight}, RandomReal[1, Length[sImgs] - 1]],
      sImgWeights = RandomReal[1, Length[sImgs]]
      ];
     postBlendingFunc@
      Blend[Colorize[#, 
          DeleteCases[{opts}, ("BaseImage" -> _) | ("BaseImageWeight" -> _) | ("PostBlendingFunction" -> _)],               
          ColorFunction -> 
           RandomChoice[{"IslandColors", "FruitPunchColors", 
             "AvocadoColors", "Rainbow"}]] & /@ sImgs, 
       sImgWeights]), {n}]
   ];

SVD image basis blending

svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24];
ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]


svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24, "BaseImage" -> First[svdBasisImages], "BaseImageWeight" -> 0.5];
ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]


ICA image basis blending

icaBlendedImages = MandalaImageBlending[Rest[icaBasisImages], 4, 36, "BaseImage" -> First[icaBasisImages], "BaseImageWeight" -> Automatic];
ImageCollage[icaBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]


NNMF image basis blending

nnmfBlendedImages = MandalaImageBlending[nnmfBasisImages, 4, 36];
ImageCollage[nnmfBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]


Using World Wide Web images

A natural question to ask is:

What would be the outcomes of the above procedures applied to mandala images found on the World Wide Web (WWW)?

Those WWW images are most likely man-made or curated.

The short answer is that the results are not that good. Better results might be obtained using a larger set of WWW images (than just the 100 used in the experiment results shown below).

Here is a sample from the WWW mandala images:


Here are the results obtained with NNMF basis:


Future plans

My other motivation for writing this document is to set up a basis for further investigations and discussions on the following topics.

  1. Having a large image database of "real world", human-made mandalas.

  2. Utilization of Neural Network algorithms for mandala creation.

  3. Utilization of Cellular Automata for mandala generation.

  4. Investigation of mandala morphing and animations.

  5. Making a domain-specific language of specifications for mandala creation and modification.

The idea of using machine learning algorithms for mandala image generation was further supported by an image classifier that recognizes fairly well (suitably normalized) mandala images obtained in different ways:



References

[1] Wikipedia entry, "Mandala".

[2] Wikipedia entry, "DeepDream".

[3] "Code that generates a mandala", Mathematica StackExchange.

[4] Anton Antonov, "Comparison of PCA and NNMF over image de-noising", (2016), MathematicaForPrediction at WordPress blog.

[5] Anton Antonov, "Handwritten digits recognition by matrix factorization", (2016), MathematicaForPrediction at WordPress blog.

[6] Chris Carlson, "Social Exploration of Design Spaces: A Proposal", (2016), Wolfram Technology Conference 2016.

[7] Anton Antonov, Independent Component Analysis Mathematica package, (2016), source code at MathematicaForPrediction at GitHub, package IndependentComponentAnalysis.m.

[8] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), source code at MathematicaForPrediction at GitHub, package NonNegativeMatrixFactorization.m.

Handwritten digits recognition by matrix factorization


This MathematicaVsR at GitHub project is for comparing Mathematica and R for the tasks of classifier creation, execution, and evaluation using the MNIST database of images of handwritten digits.

Here are the bases built with two different classifiers:

  • Singular Value Decomposition (SVD)


  • Non-Negative Matrix Factorization (NNMF)


Here are the confusion matrices of the two classifiers:

  • SVD


  • NNMF


The blog post "Classification of handwritten digits" (published 2013) has a related, more elaborate discussion of a much smaller database of handwritten digits.

Concrete steps

The concrete steps taken in scripts and documents of this project follow.

  1. Ingest the binary data files into arrays that can be visualized as digit images.
    • We have two sets: 60,000 training images and 10,000 testing images.
  2. Make a linear vector space representation of the images by simple unfolding.

  3. For each digit find the corresponding representation matrix and factorize it.

  4. Store the matrix factorization results in a suitable data structure. (These results comprise the classifier training.)
    • One of the matrix factors is seen as a new basis.
  5. For a given test image (and its linear vector space representation) find the basis that approximates it best. The corresponding digit is the classifier prediction for the given test image. (A sketch of this step follows the list.)

  6. Evaluate the classifier(s) over all test images and compute accuracy, F-Scores, and other measures.
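
The classification step can be illustrated with a minimal sketch (this is not the project's actual code; the association bases of per-digit basis matrices and the 784-element image vectors below are stand-ins for illustration): each digit's basis is used to approximate the test image vector by least squares, and the digit with the smallest residual is the prediction.

ClearAll[ClassifyByBases];
ClassifyByBases[bases_Association, vec_?VectorQ] :=
  First@Keys@TakeSmallest[
     Map[Function[B, Norm[vec - Transpose[B] . LeastSquares[Transpose[B], vec]]], bases],
     1];

(* Usage with random data standing in for the per-digit factor matrices: *)
bases = AssociationMap[RandomReal[1, {10, 784}] &, Range[0, 9]];
ClassifyByBases[bases, RandomReal[1, 784]]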


There are scripts going through the steps listed above:


The following documents give expositions that are suitable for reading and for following the steps and corresponding results.



I first figured out in R how to ingest the data in the binary files of the MNIST database. There are several online resources (blog posts, GitHub repositories) that discuss the ingestion of the MNIST binary files.

After that making the corresponding code in Mathematica was easy.

Classification results

The classification results are the same in Mathematica and R for SVD and NNMF. (As expected.)


NNMF classifiers use the MathematicaForPrediction at GitHub implementations: NonNegativeMatrixFactorization.m and NonNegativeMatrixFactorization.R.

Parallel computations

Both Mathematica and R have relatively simple set-up of parallel computations.
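
For example, in Mathematica the per-digit computations can be distributed over sub-kernels with the standard parallel primitives; the association digitMatrices below is a hypothetical stand-in for the per-digit representation matrices.

LaunchKernels[];
DistributeDefinitions[digitMatrices];
svdPerDigit = ParallelMap[SingularValueDecomposition[digitMatrices[#], 20] &, Range[0, 9]];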


It was not very straightforward to come up with visualizations of the MNIST images in R. The Mathematica visualization is much more flexible when it comes to plot labeling.

Going further

Comparison with other classifiers

Using Mathematica’s built-in classifiers it was easy to compare the SVD and NNMF classifiers with neural network ones and others. (The SVD and NNMF classifiers are much faster to build and they bring comparable precision.)

It would be nice to repeat that in R using one or several of the neural network classifiers provided by Google, Microsoft, H2O, Baidu, etc.

Classifier ensembles

Another possible extension is to use classifier ensembles and Receiver Operating Characteristic (ROC) analysis to create better classifiers. (Both in Mathematica and R.)

Importance of variables

Using a classifier-agnostic variable importance procedure we can figure out:

  • which NNMF basis vectors (images) are most important for the classification precision,

  • which image rows or columns are most important for each digit, or similarly

  • which image squares of a, say, 4×4 image grid are most important.