Phone dialing conversational agent

Introduction

This blog post announces the first committed project in the repository ConversationalAgents at GitHub. The project has designs and implementations of a phone-calling conversational agent that provides the following functionalities:

  • contacts retrieval (querying, filtering, selection),
  • contacts prioritization, and
  • phone call (work flow) handling.

The design is based on a Finite State Machine (FSM) and a context-free grammar for the commands that switch between the states of the FSM. The grammar is designed as a set of context-free grammar rules of a Domain Specific Language (DSL) in Extended Backus-Naur Form (EBNF). (For more details on DSL design and programming see [1].)
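
For illustration, here is a hypothetical fragment of such EBNF rules (an illustrative sketch only; the actual rules are in the grammar file discussed below):

<call request> = ( 'call' | 'dial' ) , <contact spec> ;
<contact spec> = <contact name> | <occupation> , [ 'from' , <group name> ] ;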

    The (current) implementation is with Wolfram Language (WL) / Mathematica using the functional parsers package [2, 3].

    This movie gives an overview from an end user perspective.

    General design

    The design of the Phone Conversational Agent (PhCA) is derived in a straightforward manner from the typical work flow of calling a contact (using, say, a mobile phone).

    The main goals for the conversational agent are the following:

    1. contacts retrieval — search, filtering, selection — using both natural language commands and manual interaction,
    2. intuitive integration with the usual work flow of phone calling.

    An additional goal is to facilitate contacts retrieval by determining the most appropriate contacts in query responses. For example, when pressing the dial button while driving to work, we might prefer the contacts of an upcoming meeting to be placed at the top of the prompted contacts list.
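
    Here is a hypothetical sketch of that prioritization idea (the helper and its arguments are illustrative, not part of the project code):

    (* Contacts that appear in upcoming meetings are moved to the front. *)
    PrioritizeContacts[contacts_List, meetingContacts_List] :=
      SortBy[contacts, Boole[! MemberQ[meetingContacts, #]] &]

    PrioritizeContacts[{"Ann", "Bob", "Carl"}, {"Carl"}]

    (* {"Carl", "Ann", "Bob"} *)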

    In this project we assume that the voice to text conversion is done with an external (reliable) component.

    It is assumed that a user of PhCA can react to both visual and spoken query results.

    The main algorithm is the following.

    1) Parse and interpret a natural language command.

    2) If the command is a contacts query that returns a single contact then call that contact.

    3) If the command is a contacts query that returns multiple contacts then:

    3.1) use natural language commands to refine and filter the query results,

    3.2) repeating step 3.1 until a single contact is obtained, then call that contact.

    4) If another type of command is given, act accordingly.
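
    Here is a schematic WL rendering of the algorithm above, with toy stand-ins (addressBook, queryContacts, and handleCommand are illustrative names, not from the actual implementation):

    (* Toy address book and a prefix-matching contacts query. *)
    addressBook = {"Ann", "Bob", "Boris"};
    queryContacts[query_String] :=
      Select[addressBook, StringStartsQ[ToLowerCase[#], ToLowerCase[query]] &];

    (* Dispatch: one match -> call; several -> refine; none -> other command. *)
    handleCommand[query_String] :=
      Block[{contacts = queryContacts[query]},
        Which[
          Length[contacts] == 1, Echo[First[contacts], "calling:"],
          Length[contacts] > 1, Echo[contacts, "refine among:"],
          True, Echo[query, "other command:"]
        ]];

    handleCommand["bo"]

    (* refine among: {Bob, Boris} *)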

    PhCA has commands for system usage help and for canceling the current contact search and starting over.

    The following FSM diagram gives the basic structure of PhCA:

    "Phone-conversational-agent-FSM-and-DB"

    This movie demonstrates how different natural language commands switch the FSM states.

    Grammar design

    The derived grammar describes sentences that: 1. fit end user expectations, and 2. are used to switch between the FSM states.

    Because of the simplicity of the FSM and the natural language commands, only a few iterations were needed in the Parser-generation-by-grammars work flow.

    The base grammar is given in the file "./Mathematica/PhoneCallingDialogsGrammarRules.m" in the EBNF form used by [2].

    Here are parsing results of a set of test natural language commands:

    "PhCA-base-grammar-test-queries-125"

    using the WL command:

    ParsingTestTable[ParseJust[pCALLCONTACT\[CirclePlus]pCALLFILTER], ToLowerCase /@ queries]
     

    (Note that according to PhCA’s FSM diagram the parsing of pCALLCONTACT is separated from pCALLFILTER, hence the need to combine the two parsers in the code line above.)

    PhCA’s FSM implementation provides interpretation and context of the functional programming expressions obtained by the parser.

    In the running script "./Mathematica/PhoneDialingAgentRunScript.m" the grammar parsers are modified to do successful parsing using data elements of the provided fake address book.

    The base grammar can be extended with the "Time specifications grammar" in order to include queries based on temporal commands.

    Running

    In order to experiment with the agent just run in Mathematica the command:

    Import["https://raw.githubusercontent.com/antononcube/ConversationalAgents/master/Projects/PhoneDialingDialogsAgent/Mathematica/PhoneDialingAgentRunScript.m"]

    The imported Wolfram Language file, "./Mathematica/PhoneDialingAgentRunScript.m", uses a fake address book based on movie creators metadata. The code structure of "./Mathematica/PhoneDialingAgentRunScript.m" allows easy experimentation and modification of the running steps.

    Here are several screen-shots illustrating a particular usage path (scan left-to-right):

    "PhCA-1-call-someone-from-x-men"" "PhCA-2-a-producer" "PhCA-3-the-third-one

    See this movie demonstrating a PhCA run.

    References

    [1] Anton Antonov, "Creating and programming domain specific languages", (2016), MathematicaForPrediction at WordPress blog.

    [2] Anton Antonov, Functional parsers, Mathematica package, MathematicaForPrediction at GitHub, 2014.

    [3] Anton Antonov, "Natural language processing with functional parsers", (2014), MathematicaForPrediction at WordPress blog.


    Monad code generation and extension

    … in Mathematica / Wolfram Language

    Anton Antonov

    MathematicaForPrediction at GitHub

    MathematicaVsR at GitHub

    June 2017

    Introduction

    This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.

    The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:

    1. easy to construct, read, and modify sequences of commands (pipelines),
    2. easy to program polymorphic behaviour,
    3. easy to program context utilization.

    Speaking informally,

    • Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.

    The theoretical background for this document is given in the Wikipedia article on monadic programming, [Wk1], and the article “The essence of functional programming” by Philip Wadler, [H3]. The code in this document is based on the primary monad definition given in [Wk1] and [H3]. (It is based on the “Kleisli triple” and is the one used in Haskell.)

    The general monad structure can be seen as:

    1. a software design pattern;
    2. a fundamental programming construct (similar to class in object-oriented programming);
    3. an interface for software types to have implementations of.

    In this document we treat the monad structure as a design pattern, [Wk3]. (After reading [H3] point 2 becomes more obvious. A similar in spirit, minimalistic approach to Object-oriented Design Patterns is given in [AA1].)

    We do not deal with types for monads explicitly; we generate code for monads instead. One reason for this is the “monad design pattern” perspective; another is that in Mathematica / WL the notion of an algebraic data type is not needed — pattern matching comes from the core “book of replacement rules” principle.

    The rest of the document is organized as follows.

    1. Fundamental sections The section “What is a monad?” gives the necessary definitions. The section “The basic Maybe monad” shows how to program a monad from scratch in Mathematica / WL. The section “Extensions with polymorphic behavior” shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming, the rest of the document can be skipped.)

    2. Monadic programming in practice The section “Monad code generation” describes packages for generating monad code. The section “Flow control in monads” describes additional, control flow functionalities. The section “General work-flow of monad code generation utilization” gives a general perspective on the use of monad code generation. The section “Software design with monadic programming” discusses (small scale) software design with monadic programming.

    3. Case study sections The case study sections “Contextual monad classification” and “Tracing monad pipelines” hopefully have interesting and engaging examples of monad code generation, extension, and utilization.

    What is a monad?

    The monad definition

    In this document a monad is a symbol m together with two operators, unit and bind, that adhere to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [H3]).

    Here are operators for a monad associated with a certain symbol M:

    1. monad unit function (“return” in Haskell notation) is Unit[x_] := M[x];
    2. monad bind function (“>>=” in Haskell notation) is a rule like Bind[M[x_], f_] := f[x] with MatchQ[f[x],M[_]] giving True.

    Note that:

    • the function Bind unwraps the content of M[_] and gives it to the function f;
    • the functions fi are responsible for returning results wrapped with the monad symbol M (see the toy example below).
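
    Here is a toy check of the definitions with a concrete monad symbol M:

    ClearAll[M, Unit, Bind];
    Unit[x_] := M[x];
    Bind[M[x_], f_] := f[x];

    Bind[Unit[3], M[#^2] &]

    (* M[9] *)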

    Here is an illustration formula showing a monad pipeline:

    "Monad-formula-generic"

    From the definition and formula it should be clear that if the result f[x] of Bind[M[x], f] passes the test MatchQ[f[x], _M], then that result is ready to be fed to the next binding operation in the monad’s pipeline. Also, it is clear that it is easy to program the pipeline functionality with Fold:

    Fold[Bind, M[x], {f1, f2, f3}]
    
    (* Bind[Bind[Bind[M[x], f1], f2], f3] *)

    The monad laws

    The monad laws definitions are taken from [H1] and [H3]. In the monad laws given below the symbol “⟹” stands for the monad’s binding operation and “↦” stands for a function in anonymous form.

    Here is a table with the laws:

    Remark: The monad laws are satisfied for every symbol in Mathematica / WL with List being the unit operation and Apply being the binding operation.
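
    That remark can be checked directly; here is a small verification sketch (with f and g returning list-wrapped values):

    (* unit = List, bind = Apply; check the three monad laws on an example. *)
    ClearAll[unit, bind, f, g];
    unit = List;
    bind[m_, fun_] := Apply[fun, m];
    f[x_] := {x + 1}; g[x_] := {2 x};

    bind[unit[3], f] === f[3]                             (* left identity: True *)
    bind[{3}, unit] === {3}                               (* right identity: True *)
    bind[bind[{3}, f], g] === bind[{3}, bind[f[#], g] &]  (* associativity: True *)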

    Expected monadic programming features

    Looking at formula (1) — and having certain programming experiences — we can expect the following features when using monadic programming.

    • Computations that can be expressed with monad pipelines are easy to construct and read.
    • By programming the binding function we can tuck in a variety of monad behaviours — this is the so-called “programmable semicolon” feature of monads.
    • Monad pipelines can be constructed with Fold, but with suitable definitions of infix operators like DoubleLongRightArrow (⟹) we can produce code that resembles the pipeline in formula (1).
    • A monad pipeline can have polymorphic behaviour by overloading the signatures of fi (and if we have to, Bind.)

    These points are clarified below. For more complete discussions see [Wk1] or [H3].

    The basic Maybe monad

    It is fairly easy to program the basic monad Maybe discussed in [Wk1].

    The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails then the whole pipeline returns a designated failure symbol, say None; otherwise the result after the last step is wrapped in another designated symbol, say Maybe.

    Here is the special version of the generic pipeline formula (1) for the Maybe monad:

    "Monad-formula-maybe"

    “Monad-formula-maybe”

    Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):

    (* Predicate for monadic values: the failure symbol or a wrapped value. *)
    MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];
    
    (* Unit: lift a value into the monad; a failure stays a failure. *)
    MaybeUnit[None] := None;
    MaybeUnit[x_] := Maybe[x];
    
    (* Bind: unwrap the value, apply f, and propagate failure if None appears. *)
    MaybeBind[None, f_] := None;
    MaybeBind[Maybe[x_], f_] := 
      Block[{res = f[x]}, If[FreeQ[res, None], res, None]];
    
    (* Echo functions for displaying the current pipeline value. *)
    MaybeEcho[x_] := Maybe@Echo[x];
    MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];
    
    (* Optional application: if applying f fails, keep the current value. *)
    MaybeOption[f_][xs_] := 
      Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];

    In order to write the code in pipeline form, let us give definitions to a suitable infix operator (like “⟹”) that uses MaybeBind:

    DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
    DoubleLongRightArrow[x_, y_, z__] := 
      DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];

    Here is an example of a Maybe monad pipeline using the definitions so far:

    data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
    
    MaybeUnit[data]⟹(* lift data into the monad *)
     (Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
     MaybeEcho⟹(* display current value *)
     (Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
    
    (* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
     None *)

    The result is None because:

    1. the data has a number that is too small, and
    2. the definition of MaybeBind stops the pipeline aggressively using a FreeQ[_,None] test.

    Monad laws verification

    Let us convince ourselves that the current definition of MaybeBind gives a monad.

    The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.

    "Monad-laws-table-Maybe"

    “Monad-laws-table-Maybe”

    Extensions with polymorphic behavior

    We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.

    For example, the extension of the Maybe monad to handle Dataset objects is fairly easy and straightforward.

    Here is the formula of the Maybe monad pipeline extended with Dataset objects:

    Here is an example of a polymorphic function definition for the Maybe monad:

    MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];
    
    MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];
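
    For example, with the definitions above the same pipeline function works on both lists and Dataset objects:

    MaybeFilter[# > 2 &][{1, 2, 3, 4}]

    (* Maybe[{3, 4}] *)

    MaybeFilter[#a > 2 &][Dataset[{<|"a" -> 1|>, <|"a" -> 4|>}]]

    (* Maybe[Dataset[{<|"a" -> 4|>}]] *)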

    See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.

    A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)

    Polymorphic monads in R’s dplyr

    The R package dplyr, [R1], has implementations centered around monadic polymorphic behavior. The command pipelines based on dplyr can work on R data frames, SQL tables, and Spark data frames without changes.

    Here is a diagram of a typical work-flow with dplyr:

    "dplyr-pipeline"

    The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.

    Monad code generation

    We can see monad code definitions like the ones for Maybe as initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates (see [WL1]). As mentioned in the introduction, we do not deal with types for monads explicitly; we generate code for monads instead.

    This section gives examples with packages that generate monad codes. The case study sections have examples of packages that utilize generated monad codes.

    Maybe monads code generation

    The package [AA2] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section “General work-flow of monad code generation utilization”.)

    Here is an example:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]
    
    GenerateMaybeMonadCode["AnotherMaybe"]
    
    data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
    
    AnotherMaybeUnit[data]⟹(* lift data into the monad *)
     (AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
     AnotherMaybeEcho⟹(* display current value *)
     (AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
    
    (* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
       AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
       None *)

    We see that we get the same result as above (None) together with a message reporting the failure.

    State monads code generation

    The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [AA3].)

    Here is the special version of the generic pipeline formula (1) for the State monad:

    "Monad-formula-State"

    “Monad-formula-State”

    Note that since the State monad pipeline carries both a value and a state, it is a good idea to have functions that manipulate them separately. For example, we can have functions for context modification and context retrieval. (These are provided in [AA3].)

    This loads the package [AA3]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]

    This generates the State monad for the prefix “StMon”:

    GenerateStateMonadCode["StMon"]

    The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Functions for context deposit and retrieval are invoked several times.

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
      StMonEchoValue⟹
      StMonEchoContext⟹
      StMonAddToContext["data"]⟹
      StMonEchoContext⟹
      (StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
      StMonEchoValue⟹
      StMonRetrieveFromContext["data"]⟹
      StMonEchoValue⟹
      StMonRetrieveFromContext["mark"]⟹
      StMonEchoValue;
    
    (* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
       context: <|mark->TooSmall,threshold->0.5|>
       context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
       value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
       value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
       value: TooSmall *)

    Flow control in monads

    We can implement dedicated functions for governing the pipeline flow in a monad.

    Let us look at a breakdown of these kinds of functions using the State monad StMon generated above.

    Optional acceptance of a function result

    A basic and simple pipeline control function is the optional acceptance of a result — if a failure is obtained by applying f then we ignore its result (and keep the current pipeline value).

    Here is an example with StMonOption:

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     StMonOption[If[# < 0.3, None] & /@ # &]⟹
     StMonEchoValue
    
    (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)

    Without StMonOption we get failure:

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     (If[# < 0.3, None] & /@ # &)⟹
     StMonEchoValue
    
    (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
       StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
       None *)

    Conditional execution of functions

    It is natural to want the ability to choose which pipeline function to apply based on a condition.

    This can be done with the functions StMonIfElse and StMonWhen.

    SeedRandom[34]
    StMonUnit[RandomReal[{0, 1}, 5]]⟹
     StMonEchoValue⟹
     StMonIfElse[
      Or @@ (# < 0.4 & /@ #) &,
      (Echo["A too small value is present.", "warning:"]; 
        StMon[Style[#1, Red], #2]) &,
      StMon[Style[#1, Blue], #2] &]⟹
     StMonEchoValue
    
     (* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
        warning: A too small value is present.
        value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
        StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)
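
     Here is a StMonWhen example over the same data, assuming (as in [AA3]) that StMonWhen[test, f] applies f when the test gives True over the current value, and otherwise passes the value on unchanged:

     SeedRandom[34]
     StMonUnit[RandomReal[{0, 1}, 5]]⟹
      StMonWhen[Max[#] > 0.7 &, (StMon[Select[#1, # > 0.7 &], #2] &)]⟹
      StMonEchoValue

     (* value: {0.789884,0.831468}
        StMon[{0.789884,0.831468},<||>] *)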

    Remark: Using flow control functions like StMonIfElse and StMonWhen with appropriate messages is a better way of handling computations that might fail. The silent failures handling of the basic Maybe monad is convenient only in a small number of use cases.

    Iterative functions

    The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of Nest, NestWhile, FoldList, etc.

    In [AA3] these functionalities are provided through the function StMonIterate.

    Here is a basic example using Nest that corresponds to Nest[#+1&,1,3]:

    StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]
    
    (* StMon[4, <||>] *)

    Consider this command that uses the full signature of NestWhileList:

    NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]
    
    (* {1, 2, 3, 4, 5} *)

    Here is the corresponding StMon iteration code:

    StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]
    
    (* StMon[{1, 2, 3, 4, 5}, <||>] *)

    Here is another results accumulation example with FixedPointList :

    StMonUnit[1.]⟹
     StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]
    
    (* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)

    When the functions NestList, NestWhileList, FixedPointList are used with StMonIterate their results can be stored in the context. Here is an example:

    StMonUnit[1.]⟹
     StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]
    
    (* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>], 
        StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>], 
        StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)

    More elaborate tests can be found in [AA8].

    Partial pipelines

    Because of the associativity law we can design pipeline flows based on functions made of “sub-pipelines.”

    fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];
    
    fDIter = Function[{x, ct}, 
       StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];
    
    StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;
    
    (*
      value: 7
      context: <||>
      value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
      context: <||> *)

    General work-flow of monad code generation utilization

    With the abilities to generate and utilize monad codes it is natural to consider the following work flow. (Also shown in the diagram below.)

    1. Come up with an idea that can be expressed with monadic programming.
    2. Look for suitable monad implementation.
    3. If there is no such implementation, make one (or two, or five).
    4. Having a suitable monad implementation, generate the monad code.
    5. Implement additional pipeline functions addressing envisioned use cases.
    6. Start making pipelines for the problem domain of interest.
    7. Are the pipelines satisfactory? If not, go to 5 (or 2).

    "make-monads"

    Monad templates

    The template nature of the general monads can be exemplified with the groups of functions in the package StateMonadCodeGenerator.m, [AA3].

    They are in five groups:

    1. base monad functions (unit testing, binding),
    2. display of the value and context,
    3. context manipulation (deposit, retrieval, modification),
    4. flow governing (optional new value, conditional function application, iteration),
    5. other convenience functions.

    We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.

    Software design with monadic programming

    The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).

    Answers to the question “When to use monadic programming?” can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (as in Haskell and Scala). Here we discuss making specialized software with or within already existing systems.

    Framework design

    Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that: 1) significant speed-up can be achieved when making new applications, and 2) a set of policies can be imposed on the new applications.

    The rigidness of the framework provides and supports its flexibility — the framework has a backbone of rigid parts and a set of “hot spots” where new functionalities are plugged-in.

    Usually Object-Oriented Programming (OOP) frameworks provide inversion of control — the general work-flow is already established, and only parts of it are changed. (This is characterized by “leave the driving to us” and “don’t call us, we will call you.”)

    The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)

    In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)

    DSL development

    Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.

    Generally, in order to make a DSL we have to choose the language syntax and grammar. With monadic programming the syntax and grammar of the commands are clear: the monad pipelines are the commands. What is left is “just” the choice of particular functions and their implementations.

    Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar — without developing the corresponding interpreters — would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.

    Contextual monad classification (case study)

    In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.

    Motivation

    We want to provide a DSL for doing machine learning classification tasks that allows us:

    1. to do basic summarization and visualization of the data,
    2. to control the splitting of the data into training and testing sets,
    3. to apply the built-in classifiers,
    4. to apply classifier ensembles (see [AA9] and [AA10]),
    5. to evaluate classifier performances with standard measures, and
    6. to make ROC plots.

    Also, we want the DSL design to provide clear directions for how to add (hook up or plug in) new functionalities.

    The package [AA4] discussed below provides such a DSL through monadic programming.

    Package and data loading

    This loads the package [AA4]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]

    This gets some test data (the Titanic dataset):

    dataName = "Titanic";
    ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
    varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
    varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
    If[dataName == "FisherIris", varNames = Most[varNames]];
    ds = ds[All, AssociationThread[varNames -> #] &];

    Monad design

    The package [AA4] provides functions for the monad ClCon — the functions implemented in [AA4] have the prefix “ClCon”.

    The classifier contexts are Association objects. The pipeline values can have the form:

    ClCon[ val, context:(_String|_Association) ]

    The ClCon-specific monad functions deposit or retrieve values from the context with the keys: “trainData”, “testData”, “classifier”. The general idea is that if the current value of the pipeline cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, then a message is issued. This is illustrated with the following pipeline-with-comments example.

    "ClCon-basic-example"

    The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context: different functions are used if that variable is a ClassifierFunction object or an association of named ClassifierFunction objects.
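
    Here is a minimal sketch of that kind of dispatch (an illustrative function, not the actual [AA4] code):

    (* Dispatch on a single ClassifierFunction vs. an association of named ones. *)
    ClearAll[measureAccuracy];
    measureAccuracy[cf_ClassifierFunction, testData_] :=
      ClassifierMeasurements[cf, testData, "Accuracy"];
    measureAccuracy[cfs_Association, testData_] :=
      Map[measureAccuracy[#, testData] &, cfs];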

    Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure. With those kinds of operations it would be easy to make interpreters for natural language DSLs.

    Another usage example

    The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, and an acceptance test; if the results are rejected, a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting, which allows the conditional classifier switch to be specified concisely.

    First let us define a function that takes a Classify method as an argument, makes a classifier, and calculates performance measures.

    ClSubPipe[method_String] :=
      Function[{x, ct},
       ClConUnit[x, ct]⟹
        ClConMakeClassifier[method]⟹
        ClConEchoFunctionContext["classifier:", 
         ClassifierInformation[#["classifier"], Method] &]⟹
        ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
        ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
        ClConEchoValue⟹
        ClConEchoFunctionContext[
         ClassifierMeasurements[#["classifier"], 
         ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
       ];

    Using the sub-pipeline function ClSubPipe we make the outlined pipeline.

    SeedRandom[12]
    res =
      ClConUnit[ds]⟹
       ClConSplitData[0.7]⟹
       ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
       ClConEchoFunctionValue["xtabs:", 
        MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
       ClSubPipe["LogisticRegression"]⟹
       (If[#1["Accuracy"] > 0.8,
          Echo["Good accuracy!", "Success:"]; ClConFail,
          Echo["Make a new classifier", "Inaccurate:"]; 
          ClConUnit[#1, #2]] &)⟹
       ClSubPipe["RandomForest"];

    "ClCon-pipeline-2-output"

    Tracing monad pipelines (case study)

    The monadic implementations in the package MonadicTracing.m, [AA5], allow tracking of the pipeline execution of functions within other monads.

    The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation (i.e., without conversion to strings, etc.).

    It turned out that by programming MonadicTracing.m I came up with a monad transformer; see [Wk2], [H2].

    Package loading

    This loads the package [AA5]:

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]

    Usage example

    This generates a Maybe monad to be used in the example (for the prefix “Perhaps”):

    GenerateMaybeMonadCode["Perhaps"]
    GenerateMaybeMonadSpecialCode["Perhaps"]

    In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings. Producing the grid of functions and comments happens “naturally” with the monad function TraceMonadEchoGrid.

    data = RandomInteger[10, 15];
    
    TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
      TraceMonadEchoContext⟹
      PerhapsFilter[# > 3 &]⟹"filter current value"⟹
      PerhapsEcho⟹"display current value"⟹
      PerhapsWhen[#[[3]] > 3 &, 
       PerhapsEchoFunction[Style[#, Red] &]]⟹
      (Perhaps[#/4] &)⟹
      PerhapsEcho⟹"display current value again"⟹
      TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];

    Note that:

    1. the tracing is initiated by just using TraceMonadUnit;
    2. pipeline functions (actual code) and comments are interleaved;
    3. putting a comment string after a pipeline function is optional.

    Another example is the ClCon pipeline in the sub-section “Monad design” in the previous section.

    Summary

    This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features for day-to-day programming and in coming up with architectural designs.

    The style is based on WL’s basic language features. As a consequence it is fairly concise and produces light overhead.

    Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.

    References

    Monadic programming

    [Wk1] Wikipedia entry: Monad (functional programming), URL: https://en.wikipedia.org/wiki/Monad_(functional_programming) .

    [Wk2] Wikipedia entry: Monad transformer, URL: https://en.wikipedia.org/wiki/Monad_transformer .

    [Wk3] Wikipedia entry: Software Design Pattern, URL: https://en.wikipedia.org/wiki/Software_design_pattern .

    [H1] Haskell.org article: Monad laws, URL: https://wiki.haskell.org/Monad_laws.

    [H2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. New York, NY: ACM. pp. 333–343. doi:10.1145/199448.199528.

    [H3] Philip Wadler, “The essence of functional programming”, (1992), 19th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.

    R

    [R1] Hadley Wickham et al., dplyr: A Grammar of Data Manipulation, (2014), tidyverse at GitHub, URL: https://github.com/tidyverse/dplyr . (See also, http://dplyr.tidyverse.org .)

    Mathematica / Wolfram Language

    [WL1] Leonid Shifrin, “Metaprogramming in Wolfram Language”, (2012), Mathematica StackExchange. (Also posted at Wolfram Community in 2017.) URL of the Mathematica StackExchange answer: https://mathematica.stackexchange.com/a/2352/34008 . URL of the Wolfram Community post: http://community.wolfram.com/groups/-/m/t/1121273 .

    MathematicaForPrediction

    [AA1] Anton Antonov, “Implementation of Object-Oriented Programming Design Patterns in Mathematica”, (2016) MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction.

    [AA2] Anton Antonov, Maybe monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m .

    [AA3] Anton Antonov, State monad code generator Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m .

    [AA4] Anton Antonov, Monadic contextual classification Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m .

    [AA5] Anton Antonov, Monadic tracing Mathematica package, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m .

    [AA6] Anton Antonov, MathematicaForPrediction utilities, (2014), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m .

    [AA7] Anton Antonov, Simple monadic programming, (2017), MathematicaForPrediction at GitHub. (Preliminary version, 40% done.) URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf .

    [AA8] Anton Antonov, Generated State Monad Mathematica unit tests, (2017), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m .

    [AA9] Anton Antonov, Classifier ensembles functions Mathematica package, (2016), MathematicaForPrediction at GitHub. URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m .

    [AA10] Anton Antonov, “ROC for classifier ensembles, bootstrapping, damaging, and interpolation”, (2016), MathematicaForPrediction at WordPress. URL: https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/ .

    Comparison of dimension reduction algorithms over mandala images generation

    Introduction

    This document discusses concrete algorithms for two different approaches to the generation of mandala images, [1]: direct construction with graphics primitives, and the use of machine learning algorithms.

    In the experiments described in this document better results were obtained with the direct algorithms. The direct algorithms were made for the Mathematica StackExchange question "Code that generates a mandala", [3].

    The main goals of this document are:

    1. to show some pretty images exploiting symmetry and multiplicity (see this album),

    2. to provide an illustrative example of comparing dimension reduction methods,

    3. to give a set-up for further discussions and investigations on mandala creation with machine learning algorithms.

    Two direct construction algorithms are given: one uses rotations of a "seed" segment, the other superimposes layers of different types. The following plots show the order in which different mandala parts are created with each of the algorithms.

    "Direct-Mandala-creation-algorithms-steps"

    In this document we use several algorithms for dimension reduction applied to collections of images following the procedure described in [4,5]. We are going to show that with Non-Negative Matrix Factorization (NNMF) we can use mandalas made with the seed segment rotation algorithm to extract layer types and superimpose them to make colored mandalas. Using the same approach with Singular Value Decomposition (SVD) or Independent Component Analysis (ICA) does not produce good layers and the superimposition produces more "watered-down", less diverse mandalas.

    From a more general perspective this document compares the statistical approach of "trying to see without looking" with the "direct simulation" approach. Another perspective is the creation of "design spaces"; see [6].

    The idea of using machine learning algorithms is appealing because there is no need to make the mental effort of understanding, discerning, approximating, and programming the principles of mandala creation. We can "just" use a large collection of mandala images and generate new ones using the "internal knowledge" data of machine learning algorithms. For example, a Neural network system like Deep Dream, [2], might be made to dream of mandalas.

    Direct algorithms for mandala generation

    In this section we present two different algorithms for generating mandalas. The first sees a mandala as being generated by rotation of a "seed" segment. The second sees a mandala as being generated by different component layers. For other approaches see [3].

    The request of [3] is for generation of mandalas for coloring by hand. That is why the mandala generation algorithms are in the grayscale space. Coloring the generated mandala images is a secondary task.

    By seed segment rotations

    One way to come up with mandalas is to generate a segment and then produce a mandala by an appropriate number of rotations of that segment.

    Here is a function and an example of random segment (seed) generation:

    Clear[MakeSeedSegment]
    MakeSeedSegment[radius_, angle_, n_Integer: 10, 
       connectingFunc_: Polygon, keepGridPoints_: False] :=
      Block[{t},
       t = Table[
         Line[{radius*r*{Cos[angle], Sin[angle]}, {radius*r, 0}}], {r, 0, 1, 1/n}];
       Join[If[TrueQ[keepGridPoints], t, {}], {GrayLevel[0.25], 
         connectingFunc@RandomSample[Flatten[t /. Line[{x_, y_}] :> {x, y}, 1]]}]
       ];
    
    seed = MakeSeedSegment[10, Pi/12, 10];
    Graphics[seed, Frame -> True]
    "Mandala-seed-segment"

    This function can make a seed segment symmetric:

    Clear[MakeSymmetric]
    MakeSymmetric[seed_] := {seed, 
       GeometricTransformation[seed, ReflectionTransform[{0, 1}]]};
    
    seed = MakeSymmetric[seed];
    Graphics[seed, Frame -> True]
    "Mandala-seed-segment-symmetric"

    Using a seed we can generate mandalas with different specification signatures:

    Clear[MakeMandala]
    MakeMandala[opts : OptionsPattern[]] :=      
      MakeMandala[
       MakeSymmetric[
        MakeSeedSegment[20, Pi/12, 12, 
         RandomChoice[{Line, Polygon, BezierCurve, 
           FilledCurve[BezierCurve[#]] &}], False]], Pi/6, opts];
    
    MakeMandala[seed_, angle_?NumericQ, opts : OptionsPattern[]] :=      
      Graphics[GeometricTransformation[seed, 
        Table[RotationMatrix[a], {a, 0, 2 Pi - angle, angle}]], opts];

    This code randomly selects whether to make the seed symmetric and randomly selects the seed generation parameters (number of concentric circles, angles):

    SeedRandom[6567]
    n = 12;
    Multicolumn@
     MapThread[
      Image@If[#1,
         MakeMandala[MakeSeedSegment[10, #2, #3], #2],
         MakeMandala[
          MakeSymmetric[MakeSeedSegment[10, #2, #3, #4, False]], 2 #2]
         ] &, {RandomChoice[{False, True}, n], 
       RandomChoice[{Pi/7, Pi/8, Pi/6}, n], 
       RandomInteger[{8, 14}, n], 
       RandomChoice[{Line, Polygon, BezierCurve, 
         FilledCurve[BezierCurve[#]] &}, n]}]
    "Seed-segment-rotation-mandalas-complex-settings"

    Here is a more concise way to generate symmetric segment mandalas:

    Multicolumn[Table[Image@MakeMandala[], {12}], 5]
    "Seed-segment-rotation-mandalas-simple-settings"

    Note that with this approach the programming of the mandala coloring is not that trivial — weighted blending of colorized mandalas is the easiest thing to do. (Shown below.)

    By layer types

    This approach was given by Simon Woods in [3].

    "For this one I’ve defined three types of layer, a flower, a simple circle and a ring of small circles. You could add more for greater variety."

    The coloring approach with image blending given below did not work well for this algorithm, so I modified the original code in order to produce colored mandalas.

    ClearAll[LayerFlower, LayerDisk, LayerSpots, MandalaByLayers]
    
    LayerFlower[n_, a_, r_, colorSchemeInd_Integer] := 
      Module[{b = RandomChoice[{-1/(2 n), 0}]}, {If[
         colorSchemeInd == 0, White, 
         RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
        Cases[ParametricPlot[
          r (a + Cos[n t])/(a + 1) {Cos[t + b Sin[2 n t]], Sin[t + b Sin[2 n t]]}, {t, 0, 2 Pi}], 
         l_Line :> FilledCurve[l], -1]}];
    
    LayerDisk[_, _, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
        RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
       Disk[{0, 0}, r]};
    
    LayerSpots[n_, a_, r_, colorSchemeInd_Integer] := {If[colorSchemeInd == 0, White, 
        RandomChoice[ColorData[colorSchemeInd, "ColorList"]]], 
       Translate[Disk[{0, 0}, r a/(4 n)], r CirclePoints[n]]};
    
    MandalaByLayers[n_, m_, coloring : (False | True) : False, opts : OptionsPattern[]] := 
      Graphics[{EdgeForm[Black], White, 
        Table[RandomChoice[{3, 2, 1} -> {LayerFlower, LayerDisk, LayerSpots}][n, RandomReal[{3, 5}], i, 
           If[coloring, RandomInteger[{1, 17}], 0]]~Rotate~(Pi i/n), {i, m, 1, -1}]}, opts];

    Here are some generated black-and-white mandalas.

    SeedRandom[6567]
    ImageCollage[Table[Image@MandalaByLayers[16, 20], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]
    "Layer-types-superimposing-BW"

    Here are some colored mandalas. (Which make me think more of Viking and Native American art than mandalas.)

    ImageCollage[Table[Image@MandalaByLayers[16, 20, True], {12}], Background -> White, ImagePadding -> 3, ImageSize -> 1200]
    "Layer-types-superimposing-colored"

    Training data

    Images by direct generation

    iSize = 400;
    
    SeedRandom[6567]
    AbsoluteTiming[
     mandalaImages = 
       Table[Image[
         MakeMandala[
          MakeSymmetric@
           MakeSeedSegment[10, Pi/12, 12, RandomChoice[{Polygon, FilledCurve[BezierCurve[#]] &}]], Pi/6], 
         ImageSize -> {iSize, iSize}, ColorSpace -> "Grayscale"], {300}];
     ]
    
    (* {39.31, Null} *)
    
    ImageCollage[ColorNegate /@ RandomSample[mandalaImages, 12], Background -> White, ImagePadding -> 3, ImageSize -> 400]
    "mandalaImages-sample"

    External image data

    See the section "Using World Wide Web images".

    Direct blending

    The most interesting results are obtained with the image blending procedure coded below over mandala images generated with the seed segment rotation algorithm.

    SeedRandom[3488]
    directBlendingImages = Table[
       RemoveBackground@
        ImageAdjust[
         Blend[Colorize[#, 
             ColorFunction -> 
              RandomChoice[{"IslandColors", "FruitPunchColors", 
                "AvocadoColors", "Rainbow"}]] & /@ 
           RandomChoice[mandalaImages, 4], RandomReal[1, 4]]], {36}];
    
    ImageCollage[directBlendingImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

    "directBlendingImages-3488-36"

    Dimension reduction algorithms application

    In this section we are going to apply the dimension reduction algorithms Singular Value Decomposition (SVD), Independent Component Analysis (ICA), and Non-Negative Matrix Factorization (NNMF) to a linear vector space representation (a matrix) of an image dataset. In the next section we use the bases generated by those algorithms to make mandala images.

    We are going to use the packages [7,8] for ICA and NNMF, respectively.

    
    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/IndependentComponentAnalysis.m"]
    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m"]
    

    Linear vector space representation

    The linear vector space representation of the images is simple — each image is flattened to a vector (row-wise), and the image vectors are put into a matrix.

    mandalaMat = Flatten@*ImageData@*ColorNegate /@ mandalaImages;
    Dimensions[mandalaMat]
    
    (* {300, 160000} *)

    Re-factoring and basis images

    The following code re-factors the images matrix with SVD, ICA, and NNMF and extracts the basis images.

    AbsoluteTiming[
     svdRes = SingularValueDecomposition[mandalaMat, 20];
    ]
    (* {5.1123, Null} *)
    
    svdBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose@svdRes[[3]]];
    
    AbsoluteTiming[
     icaRes = 
       IndependentComponentAnalysis[Transpose[mandalaMat], 20, 
        PrecisionGoal -> 4, "MaxSteps" -> 100];
    ]
    (* {23.41, Null} *)
    
    icaBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, Transpose[icaRes[[1]]]];
    
    SeedRandom[452992]
    AbsoluteTiming[
     nnmfRes = 
       GDCLS[mandalaMat, 20, PrecisionGoal -> 4, 
        "MaxSteps" -> 20, "RegularizationParameter" -> 0.1];
     ]
    (* {233.209, Null} *)
    
    nnmfBasisImages = Map[ImageAdjust@Image@Partition[#, iSize] &, nnmfRes[[2]]];

    Bases

    Let us visualize the bases derived with the matrix factorization methods.

    Grid[{{"SVD", "ICA", "NNMF"},
          Map[ImageCollage[#, Automatic, {400, 500}, 
            Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, 
          {svdBasisImages, icaBasisImages, nnmfBasisImages}]
         }, Dividers -> All]
    "Mandala-SVD-ICA-NNMF-bases-20"

    "Mandala-SVD-ICA-NNMF-bases-20"

    Here are some observations for the bases.

    1. The SVD basis has an average mandala image as its first vector and the other vectors are "differences" to be added to that first vector.

    2. The SVD and ICA bases are structured similarly. That is because ICA and SVD are both based on orthogonality — the ICA factorization uses an orthogonality criterion based on Gaussian noise properties (which is more relaxed than SVD’s standard orthogonality criterion).

    3. As expected, the NNMF basis images have black background because of the enforced non-negativity. (Black corresponds to 0, white to 1.)

    4. Compared to the SVD and ICA bases the images of the NNMF basis are structured in a radial manner. This can be demonstrated using image binarization.

    Grid[{{"SVD", "ICA", "NNMF"}, Map[ImageCollage[Binarize[#, 0.5] & /@ #, Automatic, {400, 500}, Background -> LightBlue, ImagePadding -> 5, ImageSize -> 350] &, {svdBasisImages, icaBasisImages, nnmfBasisImages}] }, Dividers -> All]
    "Mandala-SVD-ICA-NNMF-bases-binarized-0.5-20"

    We can see that binarizing of the NNMF basis images shows them as mandala layers. In other words, using NNMF we can convert the mandalas of the seed segment rotation algorithm into mandalas generated by an algorithm that superimposes layers of different types.
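
    Here is a quick sketch of that conversion using the objects defined above (an illustrative one-liner, not a tuned procedure):

    (* Colorize binarized NNMF basis images and superimpose them as layers. *)
    ImageAdjust@
     Blend[Colorize[Binarize[#, 0.5], 
         ColorFunction -> "Rainbow"] & /@ 
       RandomSample[nnmfBasisImages, 4], RandomReal[1, 4]]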

    Blending with image bases samples

    In this section we simply show different blended images made with the SVD, ICA, and NNMF bases.

    Blending function definition

    ClearAll[MandalaImageBlending]
    Options[MandalaImageBlending] = {"BaseImage" -> {}, "BaseImageWeight" -> Automatic, "PostBlendingFunction" -> (RemoveBackground@*ImageAdjust)};
    MandalaImageBlending[basisImages_, nSample_Integer: 4, n_Integer: 12, opts : OptionsPattern[]] :=      
      Block[{baseImage, baseImageWeight, postBlendingFunc, sImgs, sImgWeights},
       baseImage = OptionValue["BaseImage"];
       baseImageWeight = OptionValue["BaseImageWeight"];
       postBlendingFunc = OptionValue["PostBlendingFunction"];
       Table[(
         sImgs = 
          Flatten@Join[{baseImage}, RandomSample[basisImages, nSample]];
         If[NumericQ[baseImageWeight] && ImageQ[baseImage],
          sImgWeights = 
           Join[{baseImageWeight}, RandomReal[1, Length[sImgs] - 1]],
          sImgWeights = RandomReal[1, Length[sImgs]]
          ];
         postBlendingFunc@
          Blend[Colorize[#, 
              DeleteCases[{opts}, ("BaseImage" -> _) | ("BaseImageWeight" -> _) | ("PostBlendingFunction" -> _)],               
              ColorFunction -> 
               RandomChoice[{"IslandColors", "FruitPunchColors", 
                 "AvocadoColors", "Rainbow"}]] & /@ sImgs, 
           sImgWeights]), {n}]
       ];

    SVD image basis blending

    SeedRandom[17643]
    svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24];
    ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

    "svdBlendedImages-17643-24"

    SeedRandom[17643]
    svdBlendedImages = MandalaImageBlending[Rest@svdBasisImages, 4, 24, "BaseImage" -> First[svdBasisImages], "BaseImageWeight" -> 0.5];
    ImageCollage[svdBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

    "svdBlendedImages-baseImage-17643-24"

    ICA image basis blending

    SeedRandom[17643]
    icaBlendedImages = MandalaImageBlending[Rest[icaBasisImages], 4, 36, "BaseImage" -> First[icaBasisImages], "BaseImageWeight" -> Automatic];
    ImageCollage[icaBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

    "icaBlendedImages-17643-36"

    NNMF image basis blending

    SeedRandom[17643]
    nnmfBlendedImages = MandalaImageBlending[nnmfBasisImages, 4, 36];
    ImageCollage[nnmfBlendedImages, Background -> White, ImagePadding -> 3, ImageSize -> 1200]

    "nnmfBlendedImages-17643-36"

    Using World Wide Web images

    A natural question to ask is:

    What would be the outcomes of applying the above procedures to mandala images found on the World Wide Web (WWW)?

    Those WWW images are most likely man-made or curated.

    The short answer is that the results are not that good. Better results might be obtained using a larger set of WWW images (more than the 100 used in the experiment results shown below).

    Here is a sample from the WWW mandala images:

    "wwwMandalaImages-sample-6

    Here are the results obtained with the NNMF basis:

    "www-nnmfBlendedImages-12"

    Future plans

    My other motivation for writing this document is to set up a basis for further investigations and discussions on the following topics.

    1. Having a large image database of "real world", human-made mandalas.

    2. Utilization of Neural Network algorithms for mandala creation.

    3. Utilization of Cellular Automata for mandala generation.

    4. Investigation of mandala morphing and animations.

    5. Making a domain specific language of specifications for mandala creation and modification.

    The idea of using machine learning algorithms for mandala image generation was further supported by an image classifier that recognizes fairly well (suitably normalized) mandala images obtained in different ways:

    "Mandalas-classifer-measurements-matrix"

    References

    [1] Wikipedia entry: Mandala, https://en.wikipedia.org/wiki/Mandala .

    [2] Wikipedia entry: DeepDream, https://en.wikipedia.org/wiki/DeepDream .

    [3] "Code that generates a mandala", Mathematica StackExchange, http://mathematica.stackexchange.com/q/136974 .

    [4] Anton Antonov, "Comparison of PCA and NNMF over image de-noising", (2016), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2016/05/07/comparison-of-pca-and-nnmf-over-image-de-noising/ .

    [5] Anton Antonov, "Handwritten digits recognition by matrix factorization", (2016), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2016/11/12/handwritten-digits-recognition-by-matrix-factorization/ .

    [6] Chris Carlson, "Social Exploration of Design Spaces: A Proposal", (2016), Wolfram Technology Conference 2016. URL: http://wac.36f4.edgecastcdn.net/0036F4/pub/www.wolfram.com/technology-conference/2016/SocialExplorationOfDesignSpaces.nb , YouTube: https://www.youtube.com/watch?v=YK2523nfcms .

    [7] Anton Antonov, Independent Component Analysis Mathematica package, (2016), source code at MathematicaForPrediction at GitHub, package IndependentComponentAnalysis.m .

    [8] Anton Antonov, Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica, (2013), source code at MathematicaForPrediction at GitHub, package NonNegativeMatrixFactorization.m.

    Tries with frequencies in Java

    Introduction

    This blog post describes the installation and use in Mathematica of Tries with frequencies [1] implemented in Java [2] through a corresponding Mathematica package [3].

    A prefix tree or trie, [6], is a tree data structure that stores a set of "words" that consist of "characters" — each element can be seen as a key to itself. The article [1] and the packages [2,3,4] extend that data structure to have additional data (frequencies) associated with each key.
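
    For intuition, here is an illustrative rendering (not the package-internal representation) of a trie with frequencies over the words {"bar", "bark", "bare"}, with each node holding the number of words that pass through it:

    (* Nested-association rendering; "$count" holds the node frequency. *)
    <|"b" -> <|"$count" -> 3,
        "a" -> <|"$count" -> 3,
          "r" -> <|"$count" -> 3,
            "k" -> <|"$count" -> 1|>,
            "e" -> <|"$count" -> 1|>|>|>|>|>

    In this rendering the children of the node "r" have frequencies that sum to 2, less than the node frequency 3 — an indication that "bar" itself is one of the stored words.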

    The packages [2,3] work with lists of strings only. The package [4] can work with more general data but it is much slower.

    The main motivation for creating the package [3] was to bring the fast trie-function implementations of [2] into Mathematica in order to prototype, implement, and experiment with different text processing algorithms (like inductive grammar parser generation and entity name recognition). The speed of combining [2] and [3] is evaluated in the section "Performance tests" below.

    Set-up

    The following directory path has to contain the jar file "TriesWithFrequencies.jar".

    $JavaTriesWithFrequenciesPath = 
      "/Users/antonov/MathFiles/MathematicaForPrediction/Java/TriesWithFrequencies";
    FileExistsQ[
     FileNameJoin[{$JavaTriesWithFrequenciesPath, "TriesWithFrequencies.jar"}]]
    
    (* True *)

    For more details see the explanations in the README file in the GitHub directory of [2].

    The following directory is expected to have the Mathematica package [3].

    dirName = "/Users/antonov/MathFiles/MathematicaForPrediction";
    FileExistsQ[FileNameJoin[{dirName, "JavaTriesWithFrequencies.m"}]]
    
    (* True *)
    
    AppendTo[$Path, dirName];
    Needs["JavaTriesWithFrequencies`"]

This command installs Java (via JLink`) and loads the necessary Java libraries.

    JavaTrieInstall[$JavaTriesWithFrequenciesPath]

    Basic examples

For brevity the basic examples are not included in this blog post. Here is an album of images that shows the "JavaTrie.*" commands with their effects:

"JavaTrieExample"

More detailed explanations can be found in the Markdown document [7].

Next, we are going to look into performance evaluation examples (also given in [7]).

    Membership of words

Assume we want to find the words of "Hamlet" that are not in the book "Origin of Species". This section shows that the Java trie creation and query times for this task are quite small.

    Read words

The following code reads the words in the texts. We get approximately 33,000 words from "Hamlet" and 151,000 words from "Origin of Species".

    hWords =
      Block[{words},
       words = 
        StringSplit[
         ExampleData[{"Text", "Hamlet"}], {Whitespace, 
          PunctuationCharacter}];
       words = Select[ToLowerCase[words], StringLength[#] > 0 &]
       ];
    Length[hWords]
    
    (* 32832 *)
    
    osWords =
      Block[{words},
       words = 
        StringSplit[
         ExampleData[{"Text", "OriginOfSpecies"}], {Whitespace, 
          PunctuationCharacter}];
       words = Select[ToLowerCase[words], StringLength[#] > 0 &]
       ];
    Length[osWords]
    
    (* 151205 *)

    Membership

First we create a trie with the "Origin of Species" words:

    AbsoluteTiming[
     jOStr = JavaTrieCreateBySplit[osWords];
    ]
    
    (* {0.682531, Null} *)

Sanity check — the "Origin of Species" words are in the trie:

    AbsoluteTiming[
     And @@ JavaObjectToExpression[
       JavaTrieContains[jOStr, Characters /@ osWords]]
    ]
    
    (* {1.32224, True} *)

    Membership of "Hamlet" words into "Origin of Species":

    AbsoluteTiming[
     res = JavaObjectToExpression[
        JavaTrieContains[jOStr, Characters /@ hWords]];
    ]
    
    (* {0.265307, Null} *)

    Tallies of belonging:

    Tally[res]
    
    (* {{True, 24924}, {False, 7908}} *)

    Sample of words from "Hamlet" that do not belong to "Origin of Species":

    RandomSample[Pick[hWords, Not /@ res], 30]
    
    (* {"rosencrantz", "your", "mar", "airy", "rub", "honesty", \
    "ambassadors", "oph", "returns", "pale", "virtue", "laertes", \
    "villain", "ham", "earnest", "trail", "unhand", "quit", "your", \
    "your", "fishmonger", "groaning", "your", "wake", "thou", "liest", \
    "polonius", "upshot", "drowned", "grosser"} *)

    Common words sample:

    RandomSample[Pick[hWords, res], 30]
    
    (* {"as", "indeed", "it", "with", "wild", "will", "to", "good", "so", \
    "dirt", "the", "come", "not", "or", "but", "the", "why", "my", "to", \
    "he", "and", "you", "it", "to", "potent", "said", "the", "are", \
    "question", "soft"} *)

    Statistics

    The node counts statistics calculation is fast:

    AbsoluteTiming[
     JavaTrieNodeCounts[jOStr]
    ]
    
    (* {0.002344, <|"total" -> 20723, "internal" -> 15484, "leaves" -> 5239|>} *)

The node counts statistics computation after shrinking is comparably fast:

    AbsoluteTiming[
     JavaTrieNodeCounts[JavaTrieShrink[jOStr]]
    ]
    
    (* {0.00539, <|"total" -> 8918,  "internal" -> 3679, "leaves" -> 5239|>} *)

The conversion of a large trie to JSON and the computation of statistics over the obtained tree are reasonably fast:

    AbsoluteTiming[
     res = JavaTrieToJSON[jOStr];
    ]
    
    (* {0.557221, Null} *)
    
    AbsoluteTiming[
     Quantile[
      Cases[res, ("value" -> v_) :> v, \[Infinity]], 
      Range[0, 1, 0.1]]
    ]
    
    (* {0.019644, {1., 1., 1., 1., 2., 3., 5., 9., 17., 42., 151205.}} *)

    Dictionary infixes

    Get all words from a dictionary:

    allWords =  DictionaryLookup["*"];
    allWords // Length
    
    (* 92518 *)

    Trie creation and shrinking:

    AbsoluteTiming[
     jDTrie = JavaTrieCreateBySplit[allWords];
     jDShTrie = JavaTrieShrink[jDTrie];
    ]
    
    (* {0.30508, Null} *)

    JSON form extraction:

    AbsoluteTiming[
     jsonRes = JavaTrieToJSON[jDShTrie];
    ]
    
    (* {3.85955, Null} *)

    Here are the node statistics of the original and shrunk tries:

    "Orginal-trie-vs-Shrunk-trie-Node-Counts"

    Find the infixes that have more than three characters and appear more than 10 times:

    Multicolumn[#, 4] &@
     Select[SortBy[
       Tally[Cases[
         jsonRes, ("key" -> v_) :> v, Infinity]], -#[[-1]] &], StringLength[#[[1]]] > 3 && #[[2]] > 10 &]
    "Long-infixes-in-shrunk-dictionary-trie"

    Unit tests

Many of the examples shown in this document have corresponding tests in the file JavaTriesWithFrequencies-Unit-Tests.wlt hosted at GitHub.

    tr = TestReport[
      dirName <> "/UnitTests/JavaTriesWithFrequencies-Unit-Tests.wlt"]
    "TestReport"

    References

    [1] Anton Antonov, "Tries with frequencies for data mining", (2013), MathematicaForPrediction at WordPress blog. URL: https://mathematicaforprediction.wordpress.com/2013/12/06/tries-with-frequencies-for-data-mining/ .

    [2] Anton Antonov, Tries with frequencies in Java, (2017), source code at MathematicaForPrediction at GitHub, project Java/TriesWithFrequencies.

    [3] Anton Antonov, Java tries with frequencies Mathematica package, (2017), source code at MathematicaForPrediction at GitHub, package JavaTriesWithFrequencies.m .

    [4] Anton Antonov, Tries with frequencies Mathematica package, (2013), source code at MathematicaForPrediction at GitHub, package TriesWithFrequencies.m .

    [5] Anton Antonov, Java tries with frequencies Mathematica unit tests, (2017), source code at MathematicaForPrediction at GitHub, unit tests file JavaTriesWithFrequencies-Unit-Tests.wlt .

    [6] Wikipedia, Trie, https://en.wikipedia.org/wiki/Trie .

    [7] Anton Antonov, "Tries with frequencies in Java", (2017), MathematicaForPrediction at GitHub.

    Text analysis of Trump tweets

    Introduction

This post proclaims the MathematicaVsR at GitHub project “Text analysis of Trump tweets”, in which we compare Mathematica and R over text analyses of Twitter messages made by Donald Trump (and his staff) before the USA presidential elections in 2016.

    The project follows and extends the exposition and analysis of the R-based blog post "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half" by David Robinson at VarianceExplained.org; see [1].

    The blog post [1] links to several sources that claim that during the election campaign Donald Trump tweeted from his Android phone and his campaign staff tweeted from an iPhone. The blog post [1] examines this hypothesis in a quantitative way (using various R packages.)

    The hypothesis in question is well summarized with the tweet:

    Every non-hyperbolic tweet is from iPhone (his staff).
    Every hyperbolic tweet is from Android (from him). pic.twitter.com/GWr6D8h5ed
    — Todd Vaziri (@tvaziri) August 6, 2016

    This conjecture is fairly well supported by the following mosaic plots, [2]:

    TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Sentiment-Device TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Device-Weekday-Sentiment

We can see that Twitter messages from iPhone are much more likely to be neutral, and the ones from Android are much more polarized. As Christian Rudder (one of the founders of OkCupid, a dating website) explains in the chapter "Death by a Thousand Mehs" of the book "Dataclysm", [3], having a polarizing image (online persona) is a very good strategy to engage an online audience:

[…] And the effect isn’t small — being highly polarizing will in fact get you about 70 percent more messages. That means variance allows you to effectively jump several "leagues" up in the dating pecking order — […]

    (The mosaic plots above were made for the Mathematica-part of this project. Mosaic plots and weekday tags are not used in [1].)

    Concrete steps

    The Mathematica-part of this project does not follow closely the blog post [1]. After the ingestion of the data provided in [1], the Mathematica-part applies alternative algorithms to support and extend the analysis in [1].

    The sections in the R-part notebook correspond to some — not all — of the sections in the Mathematica-part.

    The following list of steps is for the Mathematica-part.

    1. Data ingestion
      • The blog post [1] shows how to do in R the ingestion of Twitter data of Donald Trump messages.

  • That can be done in Mathematica too using the built-in function ServiceConnect, but that is not necessary since [1] provides a link to the ingested data used in [1]:
    load(url("http://varianceexplained.org/files/trump_tweets_df.rda"))

  • That R data frame is then ingested in the Mathematica-part using RLink (see the sketch at the end of this section).

    2. Adding tags

      • We have to extract device tags for the messages — each message is associated with one of the tags "Android", "iPad", or "iPhone".

      • Using the message time-stamps each message is associated with time tags corresponding to the creation time month, hour, weekday, etc.

  • Here is a summary of the data at this stage:

      "trumpTweetsTbl-Summary"

    3. Time series and time related distributions

      • We can make several types of time series plots for general insight and to support the main conjecture.

  • Here is a Mathematica-made plot of the same statistic computed in [1]; it shows differences in tweet posting behavior:

      "TimeSeries"

      • Here are distributions plots of tweets per weekday:

      "ViolinPlots"

    4. Classification into sentiments and Facebook topics

      • Using the built-in classifiers of Mathematica each tweet message is associated with a sentiment tag and a Facebook topic tag.

      • In [1] the results of this step are derived in several stages.

      • Here is a mosaic plot for conditional probabilities of devices, topics, and sentiments:

      "Device-Topic-Sentiment-MosaicPlot"

    5. Device-word association rules

  • Using association rule learning, device tags are associated with words in the tweets.

  • In the Mathematica-part these association rules are not needed for the sentiment analysis (because of the built-in classifiers).

      • The association rule mining is done mostly to support and extend the text analysis in [1] and, of course, for comparison purposes.

      • Here is an example of derived association rules together with their most important measures:

      "iPhone-Association-Rules"

    In [1] the sentiments are derived from computed device-word associations, so in [1] the order of steps is 1-2-3-5-4. In Mathematica we do not need the steps 3 and 5 in order to get the sentiments in the 4th step.
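For completeness, here is a minimal RLink sketch of the ingestion step (the URL is the one provided in [1]; the further unpacking of RLink's data frame representation is omitted):

Needs["RLink`"]
InstallR[];
(* run the R command from [1] that loads the tweets data frame *)
REvaluate["load(url(\"http://varianceexplained.org/files/trump_tweets_df.rda\"))"];
(* bring it over; the result is RLink's representation of an R data frame *)
trumpTweetsDF = REvaluate["trump_tweets_df"];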

    Comparison

    Using Mathematica for sentiment analysis is much more direct because of the built-in classifiers.
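For instance, each message can be tagged directly with the built-in classifiers (a minimal illustration; the message text is made up):

Classify["Sentiment", "Such a great rally tonight!"]

(* one of the sentiment classes, e.g. "Positive" *)

Classify["FacebookTopic", "Watching the election returns with my family."]

(* one of the Facebook topic classes, e.g. "Politics" *)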

The R-based blog post [1] makes heavy use of the "pipeline" operator %>%, which is a fairly recent addition to R (and is both fashionable and convenient to use). In Mathematica the related operators are Postfix (//), Prefix (@), Infix (~~), Composition (@*), and RightComposition (/*).

Making the time series plots with the R package "ggplot2" requires making special data frames. I am inclined to think that the Mathematica plotting of time series is more direct, but for this task the data wrangling code in Mathematica and R is fairly comparable.

Generally speaking, the R package "arules" — used in this project for association rule learning — is somewhat awkward to use:

    • it is data frame centric, does not work directly with lists of lists, and

    • requires the use of factors.

    The Apriori implementation in “arules” is much faster than the one in “AprioriAlgorithm.m” — “arules” uses a more efficient algorithm implemented in C.

    References

    [1] David Robinson, "Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half", (2016), VarianceExplained.org.

    [2] Anton Antonov, "Mosaic plots for data visualization", (2014), MathematicaForPrediction at WordPress.

    [3] Christian Rudder, Dataclysm, Crown, 2014. ASIN: B00J1IQUX8 .

    Pareto principle adherence examples

This post (document) provides examples of the Pareto principle's manifestation in different datasets.

The Pareto principle is an interesting law that manifests in many contexts. It is also known as the "Pareto law", the "law of the significant few", and the "80-20 rule".

    For example:

    • "80% of the land is owned by 20% of the population",

    • "10% of all lakes contain 90% of all lake water."

    For extensive discussion and studied examples see the Wikipedia entry "Pareto principle", [4].

    It is a good idea to see for which parts of the analyzed data the Pareto principle manifests. Testing for the Pareto principle is usually simple. For example, assume that we have the GDP of all countries:

    countries = CountryData["Countries"];
    gdps = {CountryData[#, "Name"], CountryData[#, "GDP"]} & /@ countries;
    gdps = DeleteCases[gdps, {_, _Missing}] /. Quantity[x_, _] :> x;
    
    Grid[{RecordsSummary[gdps, {"country", "GDP"}]}, Alignment -> Top, Dividers -> All]

    GDPUnsorted1

    In order to test for the manifestation of the Pareto principle we have to (i) sort the GDP values in descending order, (ii) find the cumulative sums, (iii) normalize the obtained vector by the sum of all values, and (iv) plot the result. These steps are done with the following two commands:

    t = Reverse@Sort@gdps[[All, 2]];
    ListPlot[Accumulate[t]/Total[t], PlotRange -> All, GridLines -> {{0.2} Length[t], {0.8}}, Frame -> True]

    GDPPlot1

    In this document we are going to use the special function ParetoLawPlot defined in the next section and the package [1]. Most of the examples use data that is internally accessible within Mathematica. Several external data examples are considered.

    See the package [1] for the function RecordsSummary. See the source file [2] for R functions that facilitate the plotting of Pareto principle graphs. See the package [3] for the outlier detection functions used below.

    Definitions

This simple function makes a list plot that helps assess the manifestation of the Pareto principle. It takes the same options as ListPlot.

    Clear[ParetoLawPlot]
    Options[ParetoLawPlot] = Options[ListPlot];
    ParetoLawPlot[dataVec : {_?NumberQ ..}, opts : OptionsPattern[]] := ParetoLawPlot[{Tooltip[dataVec, 1]}, opts];
    ParetoLawPlot[dataVecs : {{_?NumberQ ..} ..}, opts : OptionsPattern[]] := ParetoLawPlot[MapThread[Tooltip, {dataVecs, Range[Length[dataVecs]]}], opts];
    ParetoLawPlot[dataVecs : {Tooltip[{_?NumberQ ..}, _] ..}, opts : OptionsPattern[]] :=
      Block[{t, mc = 0.5},
       t = Map[Tooltip[(Accumulate[#]/Total[#] &)[SortBy[#[[1]], -# &]], #[[2]]] &, dataVecs];
       ListPlot[t, opts, PlotRange -> All, GridLines -> {Length[t[[1, 1]]] Range[0.1, mc, 0.1], {0.8}}, Frame -> True, FrameTicks -> {{Automatic, Automatic}, {Automatic, Table[{Length[t[[1, 1]]] c, ToString[Round[100 c]] <> "%"}, {c, Range[0.1, mc, 0.1]}]}}]
      ];

    This function is useful for coloring the outliers in the list plots.

    ClearAll[ColorPlotOutliers]
    ColorPlotOutliers[] := # /. {Point[ps_] :> {Point[ps], Red, Point[ps[[OutlierPosition[ps[[All, 2]]]]]]}} &;
    ColorPlotOutliers[oid_] := # /. {Point[ps_] :> {Point[ps], Red, Point[ps[[OutlierPosition[ps[[All, 2]], oid]]]]}} &;
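For example, the GDP data ingested above can be plotted with its top outliers in red (an illustrative combination; the function post-processes the Point primitives of the plot):

ListPlot[Sort[gdps[[All, 2]]], PlotRange -> All, PlotTheme -> "Detailed"] //
 ColorPlotOutliers[TopOutliers@*SPLUSQuartileIdentifierParameters]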

These definitions can also be obtained by loading the packages MathematicaForPredictionUtilities.m and OutlierIdentifiers.m; see [1,3].

    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MathematicaForPredictionUtilities.m"]
    Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/OutlierIdentifiers.m"]

    Units

    Below we are going to use the metric system of units. (If preferred we can easily switch to the imperial system.)

    $UnitSystem = "Metric";(*"Imperial"*)

    CountryData

We are going to consider a typical Pareto principle example — wealth and income distribution.

    GDP

This code finds the Gross Domestic Product (GDP) of different countries:

    gdps = {CountryData[#, "Name"], CountryData[#, "GDP"]} & /@CountryData["Countries"];
    gdps = DeleteCases[gdps, {_, _Missing}] /. Quantity[x_, _] :> x;

    The corresponding Pareto plot (note the default grid lines) shows that 10% of countries have 90% of the wealth:

    ParetoLawPlot[gdps[[All, 2]], ImageSize -> 400]

    GDPPlot2

    Here is the log histogram of the GDP values.

    Histogram[Log10@gdps[[All, 2]], 20, PlotRange -> All]

    GDPHistogram1

The following code shows a log plot of the countries' GDP values together with the outliers found.

    Manipulate[
     DynamicModule[{data = Transpose[{Range[Length[gdps]], Sort[gdps[[All, 2]]]}], pos},
      pos = OutlierPosition[modFunc@data[[All, 2]], tb@*opar];
      If[Length[pos] > 0,
       ListLogPlot[{data, data[[pos]]}, PlotRange -> All, PlotTheme -> "Detailed", FrameLabel -> {"Index", "GDP"}, PlotLegends -> SwatchLegend[{"All", "Outliers"}]],
       ListLogPlot[{data}, PlotRange -> All, PlotTheme -> "Detailed", FrameLabel -> {"Index", "GDP"}, PlotLegends -> SwatchLegend[{"All", "Outliers"}]]
      ]
     ],
     {{opar, SPLUSQuartileIdentifierParameters, "outliers detector"}, {HampelIdentifierParameters, SPLUSQuartileIdentifierParameters}},
     {{tb, TopOutliers, "bottom|top"}, {BottomOutliers, TopOutliers}},
     {{modFunc, Identity, "data modifier function"}, {Identity, Log}}
    ]

    Outliers1

This table gives the values for the countries with the highest GDP.

    Block[{data = gdps[[OutlierPosition[gdps[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
     Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
    ]

    HighestGDP1

    Population

Similar data retrieval and plots can be made for the countries' populations.

    pops = {CountryData[#, "Name"], CountryData[#, "Population"]} & /@CountryData["Countries"];
    unit = QuantityUnit[pops[[All, 2]]][[1]];
    pops = DeleteCases[pops, {_, _Missing}] /. Quantity[x_, _] :> x;

    In the following Pareto plot we can see that 15% of countries have 80% of the total population:

    ParetoLawPlot[pops[[All, 2]], PlotLabel -> Row[{"Population", ", ", unit}]]

    PopPlot1

Here are the countries with the most people:

    Block[{data = pops[[OutlierPosition[pops[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
     Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
    ]

    HighestPop1

    Area

We can also see that the Pareto principle holds for the countries' areas:

    areas = {CountryData[#, "Name"], CountryData[#, "Area"]} & /@CountryData["Countries"];
    areas = DeleteCases[areas, {_, _Missing}] /. Quantity[x_, _] :> x;
    ParetoLawPlot[areas[[All, 2]]]

    AreaPlot1

    Block[{data = areas[[OutlierPosition[areas[[All, 2]], TopOutliers@*SPLUSQuartileIdentifierParameters]]]},
     Row[Riffle[#, " "]] &@Map[Grid[#, Dividers -> All, Alignment -> {Left, "."}] &, Partition[SortBy[data, -#[[-1]] &], Floor[Length[data]/3]]]
    ]

    HighestArea1

    Time series-wise

It is interesting to plot together the GDP curves of different countries. We can see that China and Poland have had rapid growth.

    res = Table[
        (t = CountryData[countryName, {{"GDP"}, {1970, 2015}}];
         t = Reverse@Sort[t["Path"][[All, 2]] /. Quantity[x_, _] :> x];
         Tooltip[t, countryName])
        , {countryName, {"USA", "China", "Poland", "Germany", "France", "Denmark"}}];
    
    ParetoLawPlot[res, PlotRange -> All, Joined -> True, PlotLegends -> res[[All, 2]]]

    GDPGrowth1

    Manipulate

This dynamic interface can be used, for a given country, to compare (i) the GDP evolution in time and (ii) the corresponding Pareto plot.

    Manipulate[
     DynamicModule[{ts, t},
      ts = CountryData[countryName, {{"GDP"}, {1970, 2015}}];
      t = Reverse@Sort[ts["Path"][[All, 2]] /. Quantity[x_, _] :> x];
      Grid[{{"Date list plot of GDP values", "GDP Pareto plot"}, {DateListPlot[ts, ImageSize -> Medium],
         ParetoLawPlot[t, ImageSize -> Medium]}}]
     ], {countryName, {"USA", "China", "Poland", "Germany", "France", "Denmark"}}]

    GDPGrowth2

    Country flag colors

    The following code demonstrates that the colors of the pixels in country flags also adhere to the Pareto principle.

    flags = CountryData[#, "Flag"] & /@ CountryData["Countries"];
    
    flags[[1 ;; 12]]

    Flags1

    ids = ImageData /@ flags;
    
    pixels = Apply[Join, Flatten[ids, 1]];
    
    Clear[ToBinFunc]
    ToBinFunc[x_] := Evaluate[Piecewise[MapIndexed[{#2[[1]], #1[[1]] < x <= #1[[2]]} &, Partition[Range[0, 1, 0.1], 2, 1]]]];
    
    pixelsInt = Transpose@Table[Map[ToBinFunc, pixels[[All, i]]], {i, 1, 3}];
    
    pixelsIntTally = SortBy[Tally[pixelsInt], -#[[-1]] &];
    
    ParetoLawPlot[pixelsIntTally[[All, 2]]]

    FlagsPlot1

    TunnelData

Looking at the lengths in the tunnel data we can see the manifestation of an exaggerated Pareto principle.

    tunnelLengths = TunnelData[All, {"Name", "Length"}];
    tunnelLengths // Length
    
    (* 1552 *)
    
    t = Reverse[Sort[DeleteMissing[tunnelLengths[[All, -1]]] /. Quantity[x_, _] :> x]];
    
    ParetoLawPlot[t]

    TunnelsPlot1

    Here is the logarithmic histogram of the lengths:

    Histogram[Log10@t, PlotRange -> All, PlotTheme -> "Detailed"]

    TunnelsHist1

    LakeData

The following code gathers the data and makes the Pareto plots of the surface areas, volumes, and commercial fish catch values for lakes. We can see that the lake volumes show an exaggerated Pareto principle.

    lakeAreas = LakeData[All, "SurfaceArea"];
    lakeVolumes = LakeData[All, "Volume"];
    lakeFishCatch = LakeData[All, "CommercialFishCatch"];
    
    data = {lakeAreas, lakeVolumes, lakeFishCatch};
    t = N@Map[DeleteMissing, data] /. Quantity[x_, _] :> x;
    
    opts = {PlotRange -> All, ImageSize -> Medium}; MapThread[ParetoLawPlot[#1, PlotLabel -> Row[{#2, ", ", #3}], opts] &, {t, {"Lake area", "Lake volume", "Commercial fish catch"}, DeleteMissing[#][[1, 2]] & /@ data}]

    LakesPlot1

    City data

One of the examples given in [5] is that city areas obey a power law. Since the Pareto principle is a kind of power law, we can confirm that observation using Pareto principle plots.

The following grid of Pareto principle plots is for the areas and population sizes of cities in selected states of the USA.

    res = Table[
        (cities = CityData[{All, stateName, "USA"}];
         t = Transpose@Outer[CityData, cities, {"Area", "Population"}];
         t = Map[DeleteMissing[#] /. Quantity[x_, _] :> x &, t, {1}];
         ParetoLawPlot[MapThread[Tooltip, {t, {"Area", "Population"}}], PlotLabel -> stateName, ImageSize -> 250])
        , {stateName, {"Alabama", "California", "Florida", "Georgia", "Illinois", "Iowa", "Kentucky", "Ohio", "Tennessee"}}];
    
    Legended[Grid[ArrayReshape[res, {3, 3}]], SwatchLegend[Cases[res[[1]], _RGBColor, Infinity], {"Area", "Population"}]]

    CitiesPlot1

    Movie ratings in MovieLens datasets

Looking into the MovieLens 20M dataset, [6], we can see that the Pareto principle holds for (1) the most rated movies and (2) the most active users. We can also see the manifestation of an exaggerated Pareto law: 90% of all ratings are for 10% of the movies.
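With the function ParetoLawPlot defined above the check itself is short. Here is a sketch, assuming ratings is the list of {userID, movieID, rating, timestamp} records from the MovieLens 20M ratings file:

(* tally the ratings per movie and per user, then make the Pareto plots *)
movieTallies = Tally[ratings[[All, 2]]];
userTallies = Tally[ratings[[All, 1]]];
ParetoLawPlot[{movieTallies[[All, 2]], userTallies[[All, 2]]}]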

    "MovieLens20M-MDensity-and-Pareto-plots"

    "MovieLens20M-MDensity-and-Pareto-plots"

4-digit passwords

The following plot, taken from the blog post "PIN analysis", [7], shows that the four-digit passwords (PINs) people use adhere to the Pareto principle: the first 20% of the (unique) most frequently used passwords correspond to 70% of all password uses.

    ColorNegate[Import["http://www.datagenetics.com/blog/september32012/c.png"]]

    Cumulative-4-Digit-Password-Usages-ColorNegated

    References

    [1] Anton Antonov, "MathematicaForPrediction utilities", (2014), source code MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction, package MathematicaForPredictionUtilities.m.

    [2] Anton Antonov, Pareto principle functions in R, source code MathematicaForPrediction at GitHub, https://github.com/antononcube/MathematicaForPrediction, source code file ParetoLawFunctions.R .

    [3] Anton Antonov, Implementation of one dimensional outlier identifying algorithms in Mathematica, (2013), MathematicaForPrediction at GitHub, URL: https://github.com/antononcube/MathematicaForPrediction/blob/master/OutlierIdentifiers.m .

    [4] Wikipedia entry, "Pareto principle", URL: https://en.wikipedia.org/wiki/Pareto_principle .

    [5] Wikipedia entry, "Power law", URL: https://en.wikipedia.org/wiki/Power_law .

    [6] GroupLens Research, MovieLens 20M Dataset, (2015).

    [7] "PIN analysis", (2012), DataGenetics.

    Handwritten digits recognition by matrix factorization

    Introduction

    This MathematicaVsR at GitHub project is for comparing Mathematica and R for the tasks of classifier creation, execution, and evaluation using the MNIST database of images of handwritten digits.

    Here are the bases built with two different classifiers:

    • Singular Value Decomposition (SVD)

    SVD-basis-for-5

    • Non-Negative Matrix Factorization (NNMF)

    NNMF-basis-for-5

    Here are the confusion matrices of the two classifiers:

    • SVD

    SVD-confusion-matrix

    • NNMF

    NNMF-confusion-matrix

    The blog post "Classification of handwritten digits" (published 2013) has a related more elaborated discussion over a much smaller database of handwritten digits.

    Concrete steps

    The concrete steps taken in scripts and documents of this project follow.

1. Ingest the binary data files into arrays that can be visualized as digit images.
  • We have two sets: 60,000 training images and 10,000 testing images.

2. Make a linear vector space representation of the images by simple unfolding.

3. For each digit find the corresponding representation matrix and factorize it.

4. Store the matrix factorization results in a suitable data structure. (These results comprise the classifier training.)
  • One of the matrix factors is seen as a new basis.

5. For a given test image (and its linear vector space representation) find the basis that approximates it best. The corresponding digit is the classifier prediction for the given test image.

6. Evaluate the classifier(s) over all test images and compute accuracy, F-Scores, and other measures.
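To make steps 2-5 concrete, here is a minimal SVD-based sketch. (Illustrative only, not the project's actual code; trainImages is assumed to be an association from digit to a list of pixel arrays. For NNMF the projection would need a pseudo-inverse, since that basis is not orthogonal.)

Clear[MakeDigitBasis, ClassifyDigit];

(* unfold each image into a row vector and keep the top k right singular vectors *)
MakeDigitBasis[imgs_List, k_Integer] :=
  Transpose[Last[SingularValueDecomposition[N[Flatten /@ imgs], k]]];

digitBases = Map[MakeDigitBasis[#, 20] &, trainImages];

(* predict the digit whose basis approximates the test image best *)
ClassifyDigit[img_, bases_Association] :=
  Block[{vec = N[Flatten[img]], residuals},
   residuals = Map[Norm[vec - Transpose[#].(#.vec)] &, bases];
   First[Keys[TakeSmallest[residuals, 1]]]];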

    Scripts

There are scripts in the project that go through the steps listed above.

    Documents

The following documents give expositions that are suitable for reading and for following the steps and the corresponding results.

    Observations

    Ingestion

I first figured out in R how to ingest the data in the binary files of the MNIST database. There were several online resources (blog posts, GitHub repositories) discussing the MNIST binary files ingestion.

    After that making the corresponding code in Mathematica was easy.

    Classification results

The classification results are the same in Mathematica and R, for both SVD and NNMF. (As expected.)

    NNMF

    NNMF classifiers use the MathematicaForPrediction at GitHub implementations: NonNegativeMatrixFactorization.m and NonNegativeMatrixFactorization.R.

    Parallel computations

Both Mathematica and R have a relatively simple set-up for parallel computations.

    Graphics

It was not very straightforward to come up with visualizations of the MNIST images in R. The Mathematica visualization is much more flexible when it comes to plot labeling.

    Going further

    Comparison with other classifiers

Using Mathematica’s built-in classifiers it was easy to compare the SVD and NNMF classifiers with neural network ones and others. (The SVD and NNMF classifiers are much faster to build, and they bring comparable precision.)

    It would be nice to repeat that in R using one or several of the neural network classifiers provided by Google, Microsoft, H2O, Baidu, etc.

    Classifier ensembles

Another possible extension is to use classifier ensembles and Receiver Operating Characteristic (ROC) curves to create better classifiers. (Both in Mathematica and R.)

    Importance of variables

Using a classifier-agnostic importance-of-variables procedure we can figure out (a minimal sketch is given after this list):

    • which NNMF basis vectors (images) are most important for the classification precision,

    • which image rows or columns are most important for each digit, or similarly

• which image squares of, say, a 4×4 image grid are most important.
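One common classifier-agnostic procedure is permutation importance: scramble one variable across the test set and measure the drop in accuracy. Here is a minimal sketch; cf is assumed to be a ClassifierFunction and testData a list of {featureVector, label} pairs:

Clear[VariableImportance];
VariableImportance[cf_ClassifierFunction, testData_List, i_Integer] :=
  Block[{vecs, labels, shuffled, baseAcc, permAcc},
   vecs = testData[[All, 1]]; labels = testData[[All, 2]];
   baseAcc = N@Mean@Boole@MapThread[SameQ, {cf /@ vecs, labels}];
   (* scramble the values of variable i across the test set *)
   shuffled = MapThread[ReplacePart[#1, i -> #2] &, {vecs, RandomSample[vecs[[All, i]]]}];
   permAcc = N@Mean@Boole@MapThread[SameQ, {cf /@ shuffled, labels}];
   baseAcc - permAcc (* a larger drop means a more important variable *)
  ];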