The Great Conversation in USA presidential speeches


This document shows a way to chart in Mathematica / WL the evolution of topics in collections of texts. The making of this document (and related code) is primarily motivated by the fascinating concept of the Great Conversation, [Wk1, MA1]. In brief, all Western civilization books are based on 103 great ideas; if we find the great ideas each significant book is based on, we can construct a time-line (spanning centuries) of the Great Conversation between the authors; see [MA1, MA2, MA3].

Instead of finding the great ideas in a text collection we extract topics statistically, using dimension reduction with Non-Negative Matrix Factorization (NNMF), [AAp3, AA1, AA2].

The presented computational results are based on the text collection of State of the Union speeches of USA presidents, [D2]. The code in this document can be easily configured to use the much smaller text collection [D1], available online and in Mathematica/WL. (The collection [D1] is fairly small, 51 documents; the collection [D2] is much larger, 2453 documents.)

The procedures (and code) described in this document, of course, work on other types of text collections. For example: movie reviews, podcasts, editorial articles of a magazine, etc.

A secondary objective of this document is to illustrate the use of the monadic programming pipeline as a software design pattern, [AA3]. In order to make the code in this document concise I wrote the package MonadicLatentSemanticAnalysis.m, [AAp5]. Compare with the code given in [AA1].

The very first version of this document was written for the 2017 summer course “Data Science for the Humanities” at the University of Oxford, UK.

Outline of the procedure applied

The procedure described in this document has the following steps.

  1. Get a collection of documents with known dates of publishing.
    • Or other types of tags associated with the documents.
  2. Do preliminary analysis of the document collection.
    • Number of documents; number of unique words.

    • Number of words per document; number of documents per word.

    • (Some of the statistics of this step are easier to compute after the linear vector space representation step.)

  3. Optionally perform Natural Language Processing (NLP) tasks.

    1. Obtain or derive stop words.

    2. Remove stop words from the texts.

    3. Apply stemming to the words in the texts.

  4. Linear vector space representation.

    • This means that we represent the collection with a document-word matrix.

    • Each unique word is a basis vector in that space.

    • For each document the corresponding point in that space is derived from the number of appearances of the document’s words.

  5. Extract topics.

    • In this document NNMF is used.

    • In order to obtain better results with NNMF, some experimentation and refinement of the topic search has to be done.

  6. Map the documents over the extracted topics.

    • The original matrix of the vector space representation is replaced with a matrix with columns representing topics (instead of words).
  7. Order the topics according to their presence across the years (or other related tags).
    • This can be done with hierarchical clustering.

    • Alternatively,

      1. for a given topic find the weighted mean of the years of the documents that have that topic, and

      2. order the topics according to those mean values.

  8. Visualize the evolution of the documents according to their topics.

    1. This can be done by simply finding the contingency matrix year vs topic.

    2. For the presidential speeches we can use the president names for the temporal axis of the time-line instead of years.

      • This is because the corresponding terms of office do not overlap.

Remark: Some of the functions used in this document combine several steps into one function call (with corresponding parameters).


This loads the packages [AAp1-AAp8]:
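
The commands below are only indicative; the exact repository file paths are assumptions:

Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MathematicaForPredictionUtilities.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicLatentSemanticAnalysis.m"]
(* ...and similarly for the rest of [AAp1-AAp8]. *)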


(Note that some of the packages are imported automatically by [AAp5].)

The functions of the central package in this document, [AAp5], have the prefix “LSAMon”. Here is a sample of those names:
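
A command like the following produces that listing (the output is shortened in the middle):

Names["LSAMon*"]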


(* {"LSAMon", "LSAMonAddToContext", "LSAMonApplyTermWeightFunctions", <>, "LSAMonUnit", "LSAMonUnitQ", "LSAMonWhen"} *)

Data load

In this section we load a text collection from a specified source.

The text collection from “Presidential Nomination Acceptance Speeches”, [D1], is small and can be used for multiple code verifications and re-runs. The “State of the Union addresses of USA presidents” text collection from [D2] was converted to a Mathematica/WL object by Christopher Wolfram (and sent to me in a private communication). The text collection [D2] provides far more interesting results (and they are shown below).

  (* Nomination acceptance speeches, [D1]: *)
  speeches = ResourceData[ResourceObject["Presidential Nomination Acceptance Speeches"]];
  names = StringSplit[Normal[speeches[[All, "Person"]]][[All, 2]], "::"][[All, 1]];

  (* State of the Union addresses, [D2], provided by Christopher Wolfram: *)
  Get["~/MathFiles/Digital humanities/Presidential speeches/"];
  names = Normal[speeches[[All, "Name"]]];

dates = Normal[speeches[[All, "Date"]]];
texts = Normal[speeches[[All, "Text"]]];
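
The dimensions of the loaded dataset can be verified with a command like:

Dimensions[speeches]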


(* {2453, 4} *)

Basic statistics for the texts

Using different contingency matrices we can derive basic statistical information about the document collection. (The document-word matrix is a contingency matrix.)

First we convert the text data into long form:

docWordRecords = 
  Join @@ MapThread[
    Thread[{##}] &, {Range@Length@texts, names, 
     DateString[#, {"Year"}] & /@ dates, 
     DeleteStopwords@*TextWords /@ ToLowerCase[texts]}, 1];

Here is a sample of the rows of the long-form:

GridTableForm[RandomSample[docWordRecords, 6], 
 TableHeadings -> {"document index", "name", "year", "word"}]

Here is a summary:

Multicolumn[RecordsSummary[docWordRecords, {"document index", "name", "year", "word"}, "MaxTallies" -> 8], 4, Dividers -> All, Alignment -> Top]

Using the long form we can compute the document-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {1, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], 
 MaxPlotPoints -> 300, ImageSize -> 800, 
 AspectRatio -> 1/3]

Here is the president-word matrix:

ctMat = CrossTabulate[docWordRecords[[All, {2, -1}]]];
MatrixPlot[Transpose@Sort@Map[# &, Transpose[ctMat@"XTABMatrix"]], MaxPlotPoints -> 300, ImageSize -> 800, AspectRatio -> 1/3]

Here is an alternative way to compute text collection statistics through the document-word matrix computed within the monad LSAMon:
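
Here is a sketch of that alternative; LSAMonTakeContext is assumed to be one of the monad functions of [AAp5], no stemming rules and no stop words are passed, and the statistics are computed directly from the sparse document-term matrix:

docTermMat = (LSAMonUnit[texts]⟹
     LSAMonMakeDocumentTermMatrix[{}, {}]⟹
     LSAMonTakeContext)["docTermMat"];
RecordsSummary[Normal@Total[Unitize[docTermMat], {2}]] (* number of distinct terms per document *)
RecordsSummary[Normal@Total[Unitize[docTermMat], {1}]] (* number of documents per term *)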


Procedure application

Stop words

Here is one way to obtain stop words:

stopWords = Complement[DictionaryLookup["*"], DeleteStopwords[DictionaryLookup["*"]]];
Length[stopWords]
RandomSample[stopWords, 12]

(* 304 *)

(* {"has", "almost", "next", "WHO", "seeming", "together", "rather", "runners-up", "there's", "across", "cannot", "me"} *)

We can complete this list with additional stop words derived from the collection itself. (Not done here.)

Linear vector space representation and dimension reduction

Remark: In the rest of the document we use “term” to mean “word” or “stemmed word”.

The following code makes a document-term matrix from the document collection, re-weights the term representations using “TF-IDF”, and then does topic extraction through dimension reduction. The dimension reduction is done with NNMF; see [AAp3, AA1, AA2].


mObj =
  LSAMonUnit[texts]⟹
   LSAMonMakeDocumentTermMatrix[{}, stopWords]⟹
   LSAMonApplyTermWeightFunctions[]⟹
   LSAMonTopicExtraction[Max[5, Ceiling[Length[texts]/100]], 60, 12, "MaxSteps" -> 6, "PrintProfilingInfo" -> True];

This table shows the pipeline commands above with comments:

Detailed description

The monad object mObj has a context of named values that is an Association with the following keys:
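
The keys can be listed with a command along these lines (LSAMonTakeContext is assumed to be one of the monad functions provided by [AAp5]):

Keys[mObj⟹LSAMonTakeContext]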


(* {"texts", "docTermMat", "terms", "wDocTermMat", "W", "H", "topicColumnPositions", "automaticTopicNames"} *)

Let us clarify the values by briefly describing the computational steps.

  1. From texts we derive the document-term matrix \text{docTermMat}\in \mathbb{R}^{n \times m}, where n is the number of documents and m is the number of terms.
    • The terms are words or stemmed words.

    • This is done with LSAMonMakeDocumentTermMatrix.

  2. From docTermMat is derived the (weighted) matrix wDocTermMat using “TF-IDF”.

    • This is done with LSAMonApplyTermWeightFunctions.
  3. Using docTermMat we find the terms that are present in a sufficiently large number of documents; their column indices are assigned to topicColumnPositions.

  4. Matrix factorization.

    1. Take the sub-matrix \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]] \in \mathbb{R}^{n \times m_1}, where m_1 = |\text{topicColumnPositions}|.

    2. Compute with NNMF the factorization \text{wDocTermMat}[[\text{All},\text{topicColumnPositions}]] \approx W H, where W \in \mathbb{R}^{n \times k}, H \in \mathbb{R}^{k \times m_1}, and k is the number of topics.

    3. The values for the keys “W”, “H”, and “topicColumnPositions” are computed and assigned by LSAMonTopicExtraction.

  5. From the top terms of each topic, automatic topic names are derived and assigned to the key automaticTopicNames in the monad context.

    • Also done by LSAMonTopicExtraction.
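
A quick way to check the dimensions described above (again assuming LSAMonTakeContext):

Dimensions /@ KeyTake[mObj⟹LSAMonTakeContext, {"wDocTermMat", "W", "H"}]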

Statistical thesaurus

At this point in the object mObj we have the factors of NNMF. Using those factors we can find a statistical thesaurus for a given set of words. The following code calculates such a thesaurus, and echoes it in a tabulated form.

queryWords = {"arms", "banking", "economy", "education", "freedom", 
   "tariff", "welfare", "disarmament", "health", "police"};

mObj⟹
  LSAMonStatisticalThesaurus[queryWords, 12];

By observing the thesaurus entries we can see that the words in each entry are semantically related.

Note that the word “welfare” strongly associates with “[applause]”. The rest of the query words do not, which can be seen by examining larger thesaurus entries:

thRes =
  mObj⟹
   LSAMonStatisticalThesaurus[queryWords, 100]⟹
   LSAMonTakeValue;
Cases[thRes, "[applause]", Infinity]

(* {"[applause]", "[applause]"} *)

The second “[applause]” comes from the thesaurus entry of “education”.

Detailed description

The statistical thesaurus is computed by using the NNMF’s right factor H.

For a given term, its corresponding column in H is found, and the nearest neighbors of that column among the other columns of H are found using the Euclidean norm. (The columns of H are vectors in \mathbb{R}^{k}.)
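
Here is a small function that sketches that nearest-neighbors computation outside of the monad; the column normalization is one possible choice, and H and terms are assumed to be taken from the monad context:

Clear[thesaurusEntrySketch];
thesaurusEntrySketch[H_?MatrixQ, terms_List, word_String, n_Integer] :=
  Module[{cols, ref},
   cols = Normalize /@ Normal[Transpose[N[H]]]; (* one k-dimensional vector per term *)
   ref = cols[[First@Flatten@Position[terms, word]]];
   terms[[Ordering[Norm[# - ref] & /@ cols, n]]]
  ];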

Extracted topics

The topics are the rows of the right factor H of the factorization obtained with NNMF.

Let us tabulate the topics found above with LSAMonTopicExtraction:

mObj⟹ LSAMonEchoTopicsTable["NumberOfTerms" -> 6, "MagnificationFactor" -> 0.8, Appearance -> "Horizontal"];

Map documents over the topics

The function LSAMonTopicsRepresentation finds the top outliers for each row of NNMF’s left factor W. (The outliers are found using the package [AAp4].) The obtained list of indices gives the topic representation of the collection of texts.


{{53}, {47, 53}, {25}, {46}, {44}, {15, 42}, {18}, <>, {30}, {33}, {7, 60}, {22, 25}, {12, 13, 25, 30, 49, 59}, {48, 57}, {14, 41}}

Further we can see that if the documents have tags associated with them — like author names or dates — we can make a contingency matrix of tags vs topics. (See [AAp8, AA4].) This is also done by the function LSAMonTopicsRepresentation that takes tags as an argument. If the tags argument is Automatic, then the tags are simply the document indices.

Here is an example:

rsmat = mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue;

Here is an example of calling the function LSAMonTopicsRepresentation with arbitrary tags.

rsmat = mObj⟹LSAMonTopicsRepresentation[DateString[#, "MonthName"] & /@ dates]⟹LSAMonTakeValue;

Note that the matrix plots above are very close to the charting of the Great Conversation that we are looking for. This can be made more obvious by observing the row names and column names in the tabulation of the transposed matrix rsmat:

Magnify[#, 0.6] &@MatrixForm[Transpose[rsmat]]

Charting the Great Conversation

In this section we show several ways to chart the Great Conversation in the collection of speeches.

There are several possible ways to make the chart: using a time-line plot, using heat-map plot, and using appropriate tabulation (with MatrixForm or Grid).

In order to make the code in this section more concise the package RSparseMatrix.m, [AAp7, AA5], is used.

Topic name to topic words

This command makes an Association between the topic names and the top topic words.

aTopicNameToTopicTable = 
   mObj⟹LSAMonTopicsTable["NumberOfTerms" -> 12]⟹LSAMonTakeValue;

Here is a sample:

Magnify[#, 0.7] &@ aTopicNameToTopicTable[[1 ;; 3]]

Time-line plot

This command makes a contingency matrix between the documents and the topics (as described above):

rsmat = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[Automatic]⟹LSAMonTakeValue]

This time-line plot shows the Great Conversation in the USA presidents’ State of the Union speeches:

TimelinePlot[
 GroupBy[  (* for each topic: the dates of the documents exhibiting that topic *)
  Most[ArrayRules[SparseArray[rsmat]]][[All, 1]] /. {r_Integer, c_Integer} :> {RowNames[rsmat][[r]], ColumnNames[rsmat][[c]]},
  (Tooltip[#2, aTopicNameToTopicTable[#2]] &) @@ # &, Map[(dates[[ToExpression@#1]] &) @@ # &]],
 PlotTheme -> "Detailed", ImageSize -> 1000, AspectRatio -> 1/2, PlotLayout -> "Stacked"]

The plot is too cluttered, so it is a good idea to investigate other visualizations.

Topic vs president heatmap

We can use the USA president names instead of years in the Great Conversation chart because the USA presidents’ terms of office do not overlap.

This makes a contingency matrix of presidents vs topics:

rsmat2 = ToRSparseMatrix[mObj⟹LSAMonTopicsRepresentation[names]⟹LSAMonTakeValue];

Here we compute the chronological order of the presidents based on the dates of their speeches:

nameToMeanYearRules = 
  Map[#[[1, 1]] -> Mean[N@#[[All, 2]]] &, 
   GatherBy[MapThread[List, {names, ToExpression[DateString[#, "Year"]] & /@ dates}], First]];
ordRowInds = Ordering[RowNames[rsmat2] /. nameToMeanYearRules];

This heat-map plot uses the (experimental) package HeatmapPlot.m, [AAp6]:

Block[{m = rsmat2[[ordRowInds, All]]},
 HeatmapPlot[SparseArray[m], RowNames[m], 
  Thread[Tooltip[ColumnNames[m], aTopicNameToTopicTable /@ ColumnNames[m]]],
  DistanceFunction -> {None, Sort}, ImageSize -> 1000, 
  AspectRatio -> 1/2]
 ]

Note the value of the option DistanceFunction: there is no re-ordering of the rows, and the columns are re-ordered by sorting. Also, the topic names on the horizontal axis have tool-tips.


Text data

[D1] Wolfram Data Repository, "Presidential Nomination Acceptance Speeches".

[D2] US Presidents, State of the Union Addresses, Trajectory, 2016. ISBN 1681240009 / 9781681240008.

[D3] Gerhard Peters, "Presidential Nomination Acceptance Speeches and Letters, 1880-2016", The American Presidency Project.

[D4] Gerhard Peters, "State of the Union Addresses and Messages", The American Presidency Project.


[AAp1] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub.

[AAp2] Anton Antonov, Implementation of document-term matrix construction and re-weighting functions in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp3] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp4] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub.

[AAp5] Anton Antonov, Monadic latent semantic analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp6] Anton Antonov, Heatmap plot Mathematica package, (2017), MathematicaForPrediction at GitHub.

[AAp7] Anton Antonov, RSparseMatrix Mathematica package, (2015), MathematicaForPrediction at GitHub.

[AAp8] Anton Antonov, Cross tabulation implementation in Mathematica, (2017), MathematicaForPrediction at GitHub.

Books and articles

[AA1] Anton Antonov, "Topic and thesaurus extraction from a document collection", (2013), MathematicaForPrediction at GitHub.

[AA2] Anton Antonov, "Statistical thesaurus from NPR podcasts", (2013), MathematicaForPrediction at WordPress blog.

[AA3] Anton Antonov, "Monad code generation and extension", (2017), MathematicaForPrediction at GitHub.

[AA4] Anton Antonov, "Contingency tables creation examples", (2016), MathematicaForPrediction at WordPress blog.

[AA5] Anton Antonov, "RSparseMatrix for sparse matrices with named rows and columns", (2015), MathematicaForPrediction at WordPress blog.

[Wk1] Wikipedia entry, Great Conversation.

[MA1] Mortimer Adler, "The Great Conversation Revisited," in The Great Conversation: A Peoples Guide to Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago, 1990, p. 28.

[MA2] Mortimer Adler, "Great Ideas".

[MA3] Mortimer Adler, "How to Think About the Great Ideas: From the Great Books of Western Civilization", 2000, Open Court.


Phone dialing conversational agent


This blog post proclaims the first committed project in the repository ConversationalAgents at GitHub. The project has designs and implementations of a phone calling conversational agent that aims at providing the following functionalities:

  • contacts retrieval (querying, filtering, selection),
  • contacts prioritization, and
  • phone call (work flow) handling.
    The design is based on a Finite State Machine (FSM) and context free grammar(s) for commands that switch between the states of the FSM. The grammar is designed as context free grammar rules of a Domain Specific Language (DSL) in Extended Backus-Naur Form (EBNF). (For more details on DSL design and programming see [1].)

    The (current) implementation is with Wolfram Language (WL) / Mathematica using the functional parsers package [2, 3].

    This movie gives an overview from an end user perspective.

    General design

    The design of the Phone Conversational Agent (PhCA) is derived in a straightforward manner from the typical work flow of calling a contact (using, say, a mobile phone).

    The main goals for the conversational agent are the following:

    1. contacts retrieval — search, filtering, selection — using both natural language commands and manual interaction,
    2. intuitive integration with the usual work flow of phone calling.

    An additional goal is to facilitate contacts retrieval by determining the most appropriate contacts in query responses. For example, while driving to work, upon pressing the dial button we might prefer the contacts of an upcoming meeting to be placed on top of the prompted contacts list.

    In this project we assume that the voice to text conversion is done with an external (reliable) component.

    It is assumed that a user of PhCA can react to both visual and spoken query results.

    The main algorithm is the following.

    1) Parse and interpret a natural language command.

    2) If the command is a contacts query that returns a single contact then call that contact.

    3) If the command is a contacts query that returns multiple contacts then:

    3.1) use natural language commands to refine and filter the query results,

    3.2) until a single contact is obtained. Call that single contact.

    4) If another type of command is given, act accordingly.

    PhCA has commands for system usage help and for canceling the current contact search and starting over.
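
    Here is a minimal sketch of the dispatch behind steps 2) and 3); the state names and the contactsQuery stub are hypothetical stand-ins for the grammar-based parsing and querying:

    contactsQuery[input_String] :=
      Select[{"Anna Adams", "Anna Brown", "Bob Clark"}, StringContainsQ[#, input, IgnoreCase -> True] &];

    phcaStep["WaitingForCallCommand", input_String] :=
      Module[{found = contactsQuery[input]},
       Which[
        Length[found] == 1, {"CallContact", found},     (* single match: call that contact *)
        Length[found] > 1, {"PromptSelection", found},   (* multiple matches: refine further *)
        True, {"Help", {}}                               (* otherwise: show the usage help *)
       ]];

    phcaStep["WaitingForCallCommand", "anna"]

    (* {"PromptSelection", {"Anna Adams", "Anna Brown"}} *)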

    The following FSM diagram gives the basic structure of PhCA:


    This movie demonstrates how different natural language commands switch the FSM states.

    Grammar design

    The derived grammar describes sentences that: 1. fit end user expectations, and 2. are used to switch between the FSM states.

    Because of the simplicity of the FSM and the natural language commands, only a few iterations were done with the parser-generation-by-grammars work flow.

    The base grammar is given in the file "./Mathematica/PhoneCallingDialogsGrammarRules.m" in the EBNF notation used by [2].

    Here are parsing results of a set of test natural language commands:


    using the WL command:

    ParsingTestTable[ParseJust[pCALLCONTACT\[CirclePlus]pCALLFILTER], ToLowerCase /@ queries]

    (Note that according to PhCA’s FSM diagram the parsing of pCALLCONTACT is separated from pCALLFILTER, hence the need to combine the two parsers in the code line above.)

    PhCA’s FSM implementation provides interpretation and context of the functional programming expressions obtained by the parser.

    In the running script "./Mathematica/PhoneDialingAgentRunScript.m" the grammar parsers are modified to do successful parsing using data elements of the provided fake address book.

    The base grammar can be extended with the "Time specifications grammar" in order to include queries based on temporal commands.


    In order to experiment with the agent just run in Mathematica the command:
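
    The command presumably amounts to importing the run script from the project's root directory:

    Get["./Mathematica/PhoneDialingAgentRunScript.m"]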


    The imported Wolfram Language file, "./Mathematica/PhoneDialingAgentRunScript.m", uses a fake address book based on movie creators metadata. The code structure of "./Mathematica/PhoneDialingAgentRunScript.m" allows easy experimentation and modification of the running steps.

    Here are several screen-shots illustrating a particular usage path (scan left-to-right):

    (Screen-shots: "call someone from x-men", "a producer", "the third one".)

    See this movie demonstrating a PhCA run.


    [1] Anton Antonov, "Creating and programming domain specific languages", (2016), MathematicaForPrediction at WordPress blog.

    [2] Anton Antonov, Functional parsers, Mathematica package, MathematicaForPrediction at GitHub, 2014.

    [3] Anton Antonov, "Natural language processing with functional parsers", (2014), MathematicaForPrediction at WordPress blog.

    Text analysis of Trump tweets


    This post is to proclaim the MathematicaVsR at GitHub project “Text analysis of Trump tweets” in which we compare Mathematica and R over text analyses of Twitter messages made by Donald Trump (and his staff) before the 2016 USA presidential election.

    The project follows and extends the exposition and analysis of the R-based blog post "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half" by David Robinson; see [1].

    The blog post [1] links to several sources that claim that during the election campaign Donald Trump tweeted from his Android phone and his campaign staff tweeted from an iPhone. The blog post [1] examines this hypothesis in a quantitative way (using various R packages.)

    The hypothesis in question is well summarized with the tweet:

    Every non-hyperbolic tweet is from iPhone (his staff).
    Every hyperbolic tweet is from Android (from him).
    — Todd Vaziri (@tvaziri) August 6, 2016

    This conjecture is fairly well supported by the following mosaic plots, [2]:

    (Mosaic plots: sentiment vs. device, and device vs. weekday vs. sentiment.)

    We can see that the Twitter messages from iPhone are much more likely to be neutral, and the ones from Android are much more polarized. As Christian Rudder (one of the founders of OkCupid, a dating website) explains in the chapter "Death by a Thousand Mehs" of the book "Dataclysm", [3], having a polarizing image (online persona) is a very good strategy to engage an online audience:

    […] And the effect isn’t small-being highly polarizing will in fact get you about 70 percent more messages. That means variance allows you to effectively jump several "leagues" up in the dating pecking order – […]

    (The mosaic plots above were made for the Mathematica-part of this project. Mosaic plots and weekday tags are not used in [1].)

    Concrete steps

    The Mathematica-part of this project does not follow closely the blog post [1]. After the ingestion of the data provided in [1], the Mathematica-part applies alternative algorithms to support and extend the analysis in [1].

    The sections in the R-part notebook correspond to some — not all — of the sections in the Mathematica-part.

    The following list of steps is for the Mathematica-part.

    1. Data ingestion
      • The blog post [1] shows how to do in R the ingestion of Twitter data of Donald Trump messages.

      • That can be done in Mathematica too using the built-in function ServiceConnect, but that is not necessary since [1] provides a link to the ingested data used in [1]:

      • Which leads to the ingesting of an R data frame in the Mathematica-part using RLink.

    2. Adding tags

      • We have to extract device tags for the messages — each message is associated with one of the tags "Android", "iPad", or "iPhone".

      • Using the message time-stamps each message is associated with time tags corresponding to the creation time month, hour, weekday, etc.

      • Here is a summary of the data at this stage:


    3. Time series and time related distributions

      • We can make several types of time series plots for general insight and to support the main conjecture.

      • Here is a Mathematica-made plot for the same statistic computed in [1] that shows differences in tweet posting behavior:


      • Here are distribution plots of tweets per weekday:


    4. Classification into sentiments and Facebook topics

      • Using the built-in classifiers of Mathematica each tweet message is associated with a sentiment tag and a Facebook topic tag.

      • In [1] the results of this step are derived in several stages.

      • Here is a mosaic plot for conditional probabilities of devices, topics, and sentiments:


    5. Device-word association rules

      • Using association rule learning, device tags are associated with words in the tweets.

      • In the Mathematica-part these association rules are not needed for the sentiment analysis (because of the built-in classifiers).

      • The association rule mining is done mostly to support and extend the text analysis in [1] and, of course, for comparison purposes.

      • Here is an example of derived association rules together with their most important measures:


    In [1] the sentiments are derived from computed device-word associations, so in [1] the order of steps is 1-2-3-5-4. In Mathematica we do not need the steps 3 and 5 in order to get the sentiments in the 4th step.


    Using Mathematica for sentiment analysis is much more direct because of the built-in classifiers.
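
    For example, the built-in classifiers can be applied directly to a message string (the string here is only illustrative):

    Classify["Sentiment", "Such a great rally tonight, thank you!"]
    Classify["FacebookTopic", "Such a great rally tonight, thank you!"]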

    The R-based blog post [1] uses heavily the "pipeline" operator %>%, which is a relatively recent addition to R (and it is both fashionable and convenient to use). In Mathematica the related operators are Postfix (//), Prefix (@), Infix (~ ~), Composition (@*), and RightComposition (/*).
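
    Here are small examples of these operators:

    StringTrim["  Hello  "] // ToLowerCase        (* Postfix, closest in spirit to %>% *)
    ToLowerCase@StringTrim@"  Hello  "            (* Prefix *)
    "Hello" ~StringJoin~ "!"                      (* Infix *)
    (ToLowerCase @* StringTrim)["  Hello  "]      (* Composition *)
    (StringTrim /* ToLowerCase)["  Hello  "]      (* RightComposition *)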

    Making the time series plots with the R package "ggplot2" requires making special data frames. I am inclined to think that the Mathematica plotting of time series is more direct, but for this task the data wrangling codes in Mathematica and R are fairly comparable.

    Generally speaking, the R package "arules" — used in this project for Associations rule learning — is somewhat awkward to use:

    • it is data frame centric, does not work directly with lists of lists, and

    • requires the use of factors.

    The Apriori implementation in “arules” is much faster than the one in “AprioriAlgorithm.m” — “arules” uses a more efficient algorithm implemented in C.


    [1] David Robinson, "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half", (2016),

    [2] Anton Antonov, "Mosaic plots for data visualization", (2014), MathematicaForPrediction at WordPress.

    [3] Christian Rudder, Dataclysm, Crown, 2014. ASIN: B00J1IQUX8.

    Creating and programming domain specific languages


    In this blog post I will provide links to documents, packages, blog posts, and discussions for creating and utilizing Domain Specific Languages (DSLs). I have discussed a few DSLs in previous blog posts (linked below). This blog post provides a more general, higher level view on the application and creation of DSLs. The concrete examples are with Mathematica, but the steps are general and can be done with any programming languages and tools.

    When to apply DSLs

    Here are some situations for applying DSLs.

    1. When designing conversational engines.
    2. When there are too many usage scenarios and tuning options for the developed algorithms.
      • For example, we have a bunch of search, recommendation, and interaction algorithms for a dating site. A different department, the User Experience Department (UED), designs interactive user interfaces for these algorithms. We make a natural language DSL that invokes the different algorithms according to specified outcomes. With the DSL the different designs produced by UED are much more easily prototyped, implemented, or fleshed out. The DSL also gives UED an easier to understand view of the functionalities provided by the algorithms.
    3. When designing an API for a collection of algorithms.
      • Just designing a DSL can bring clarity of what signatures should be in the API.
      • NIntegrate‘s Method option was designed and implemented using a DSL. See this video between 25:00 and 27:30.

    Designing DSLs

    1. Decide what kind of sentences the DSL is going to have.
      • Are natural language sentences going to be used?
      • Are the language words known beforehand or not?
    2. Prepare, create, or accumulate a list of representative sentences.
      • In some cases using Morphological Analysis can greatly help for coming up with use cases and the corresponding sentences.
    3. Create a context free grammar that describes the sentences from the previous step. (Or a large subset of them.)
      • At this stage I use exclusively Extended Backus-Naur Form (EBNF).
      • In some cases the grammar terminals are not known at the design stage and have to be retrieved in some way (from a database or through natural language processing).
      • Some conversational engine systems allow or require the grammar specification to be done in XML. I would still do BNF and then move to XML.
        • It is not that hard to write a parser-and-interpreter that translates BNF into XML. See the end of this blog post for that kind of translation of BNF into OPML.
    4. Program parser(s) for the grammar.
      • Most of the time I use functional parsers.
      • The package FunctionalParsers.m provides a Mathematica implementation of this kind of parsing.
      • The package can automatically generate parsers from a grammar given in EBNF. (See the coding example below.)
      • I have programmed versions of this package in R and Lua.
    5. Program an interpreter for the parsed sentences.
      • At this stage the parsed sentences are hooked to the algorithms of the problem domain.
      • The package FunctionalParsers.m allows this to be done fairly easily.
    6. Test the parsing and interpretation.

    See the code example below illustrating steps 3-6.

    Introduction to using DSLs in Mathematica

    1. This blog post “Natural language processing with functional parsers” gives an introduction to the DSL application in Mathematica.
    2. This detailed slide-show presentation “Functional parsers for an integration requests language grammar” shows how to use the package FunctionalParsers.m over a small grammar.
    3. The answer of the MSE question “How to parse a clojure expression?” gives a good introduction with a simple grammar and shows both direct parser programming and automatic generation from EBNF.

    Advanced example

    The blog post “Simple time series conversational engine” discusses the creation (design and programming) of a simple conversational engine for time series analysis (data loading, finding outliers and trends.)

    Here is a movie demonstrating that conversational engine:

    Other discussions

    1. A small part, from 17:30 to 21:00, of the WTC 2012 “Spatial Access Methods and Route Finding” presentation shows a DSL for points of interest queries.
    2. The answer of the MSE question “CSS Selectors for Symbolic XML” uses FunctionalParsers.m .
    3. This Quantile Regression presentation is aided by the  “Simple time series conversational engine” mentioned above.

    Coding example

    This coding example demonstrates steps 3-6 discussed above.
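
    Here is a toy illustration; it assumes that ParseSymbol and the infix operators \[CircleTimes] (sequential composition) and \[CirclePlus] (alternative composition) are among the core combinators of FunctionalParsers.m, that the import URL points at the package in the MathematicaForPrediction repository, and that the interpretation step (5) is only indicated by a comment:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/FunctionalParsers.m"];

    (* Steps 3-4: a tiny grammar and the corresponding parsers. *)
    pGREETING = ParseSymbol["hello"]\[CirclePlus]ParseSymbol["hi"];
    pWHO = ParseSymbol["world"]\[CirclePlus]ParseSymbol["there"];
    pSENTENCE = pGREETING\[CircleTimes]pWHO;

    (* Step 5: here the parser output would be hooked to domain functions (e.g. with ParseApply). *)

    (* Step 6: test the parsing. *)
    ParsingTestTable[ParseJust[pSENTENCE], ToLowerCase /@ {"Hello world", "Hi there", "Hello Bob"}]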



    Natural language processing with functional parsers

    Natural Language Processing (NLP) can be done with a structural approach using grammar rules. (The other type of NLP uses statistical methods.) In this post I discuss the use of functional parsers for the parsing and interpretation of small sets of natural language sentences within specific contexts. Functional parsing is also known as monadic parsing or parsing with combinators.

    Generally, I am using functional parsers to make Domain Specific Languages (DSLs). I use DSLs to make command interfaces to search and recommendation engines and also to design and prototype conversational engines. I use extensively the so-called Backus-Naur Form (BNF) for the grammar specifications. Clearly a DSL can be very close to natural language and provide sufficient means for interpretation within a given (narrow) context. (Like function integration in Calculus, smart phone directory browsing and selection, or search for something to eat nearby.)

    I implemented and uploaded a package for construction of functional parsers: see FunctionalParsers.m hosted by the project MathematicaForPrediction at GitHub.

    The package provides the ability to quickly program parsers using a core system of functional parsers, as described in the article “Functional parsers” by Jeroen Fokker.

    The parsers (in both the package and the article) are categorized in the groups: basic, combinators, and transformers. Immediate interpretation can be done with transformer parsers, but the package also provides functions for evaluation of parser output within a context of data and functions.

    Probably most importantly, the package provides functions for automatic generation of parsers from grammars in EBNF.

    Here is an example of parsing the sentences of an integration requests language:

    Interpretation of integration requests

    Here is the grammar:
    Integration requests EBNF grammar

    The grammar can be visualized with this mind map:

    Integration command

    The mind map was hand made with MindNode Pro. Generally, the branches represent alternatives, but if two branches are connected the direction of the arrow connecting them shows a sequence combination.

    With the FunctionalParsers.m package we can automatically generate a mind map by parsing the string of the grammar in EBNF into OPML:


    (And here is a PDF of the automatically generated mind map: IntegrationRequestsGenerated.)

    I also made a slide show that gives an introduction to how the package is used: “Functional parsers for an integration requests language grammar”.

    A more complicated example is this conversational engine for manipulation of time series data. (Data loading, finding outliers and trends. More details in the next blog post.)

    Statistical thesaurus from NPR podcasts

    Five months ago I worked with transcripts of National Public Radio (NPR) podcasts. The transcripts are available at the NPR site; see for example “From child actor to artist…”.

    Using nearly 5000 transcripts I experimented with topic extraction and statistical thesaurus derivation. The topics are too bulky to show here, but I am going to show some of the statistical thesaurus entries.

    I used dimension reduction with Non-Negative Matrix Factorization (NNMF). For more detailed explanations, code for computations, and experimental results see this paper “Topic and thesaurus extraction from a document collection” provided by the MathematicaForPrediction project at GitHub. (The code for NNMF is also provided by the MathematicaForPrediction project at GitHub.)

    First let me describe the data. The collection has 5123 transcripts.

    Here is a sample of the transcripts (only the first 400 characters of each are taken):
    NPR podcast sample 400 characters per podcast

    Here is the distribution of the string lengths of the transcripts:
    5123 NPR podcasts string length

    I removed custom-selected stop words from the transcripts. I also stemmed the words using the Snowball stemmer. The stemmed words are called “terms” below.

    Here are descriptive statistics and the distribution of the number of transcripts per term:
    Transcripts per term

    Here are descriptive statistics and the distribution of the number of terms per transcript:
    Terms per transcript

    I did not compute the whole statistical thesaurus. Instead I made a function that computes the thesaurus entry of a given word using the right NNMF factor with proper normalization.

    Here are sample results of the thesaurus entry “retrieval” (note that the right column contains word stems):
    Statistical thesaurus entries