
Activities Blog For Court Hearings

(without any proprietary source code)

As a consequence of my decade-long struggle in a massively invalidating Mass. Family Court, I had no choice but to learn “the law” behind complex RICO litigations.

Always fascinated by dynamic, “life-like” (or multi-threaded) interactions, I immediately imagined the patterns behind the multitude of claims of my RICO class action complaint as the ideal application for one of my favorite “visualization” software packages, Cytoscape.

A class action RICO lawsuit is certainly not about our biological proteins or genes, but simulating a causal network of “interstate commerce” certainly has direct “metabolic” parallels. Through my years of data-science work, I also fell in love with Jupyter, the ultimate “lab” for someone who only has a laptop.

The new JupyterLab is extensible with React “visualization” widgets, meaning that one can freely experiment with a Python back-end feeding a state-of-the-art front-end customized with Typescript.

With my continued inability to reliably run demanding NLP tasks, my only choice seems to be to seek out the sweet spot of my trusty old laptop’s capabilities: its great “visualization.”

Therefore, I will switch gears, and for the next three months, I will focus on combining the above-listed tech into a JupyterLab extension. My objective will be to interactively simulate, evaluate, and validate my (and possibly also generic) civil RICO claims.

My ongoing weekly blog will reflect this necessary switch of focus. Follow the new project here.


Intense legal work.


Completion of the legal work (for now).

The conversion of the D3 simulation and visualization sources is also largely completed, with initial, specifically targeted examples to follow.


Intense legal and survival work.


The focus has now shifted back to the higher-level data views and their GraphQL representations… TBD

The public websites have also been refreshed and updated in response to the crushing intensity of the Mass. courts’ efforts to trap me and silence me for good.


Intense legal work.


The core of the simulation and visualization engine comes from D3. D3 is quite a large Javascript library. It comes with all the idiosyncratic patterns that Typescript has already simplified (e.g., lots of function-scoped vars instead of block-scoped lets and consts).

While top-level type declarations are available for D3, there also seem to be strong “impedances” when using ECMAScript modules. Without significantly “hacking” the sources and effectively eliding all types, D3 becomes an immediate roadblock for us.

As the objective is to use the Typescript compiler to generate the target modules for the entire codebase (i.e., without using Babel or other translators), we have decided to simplify and “modularize” the original sources by also propagating type annotations to the internals.
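As a hypothetical illustration of the conversion pattern (the function below is modeled on the shape of D3’s array-extent helper, not copied from its sources): a var-based, untyped internal becomes a block-scoped, fully typed ECMAScript module export that tsc can compile directly.

```typescript
// Before (D3-style): function extent(values) { var i = -1, n = values.length, min, max; ... }
// After: block-scoped bindings and explicit types propagated to the internals.
export function extent(values: readonly number[]): [number, number] | undefined {
  let min: number | undefined;
  let max: number | undefined;
  for (const v of values) {
    if (Number.isNaN(v)) continue; // skip non-comparable values
    if (min === undefined || v < min) min = v;
    if (max === undefined || v > max) max = v;
  }
  return min === undefined || max === undefined ? undefined : [min, max];
}
```

The typed signature makes the undefined-for-empty-input behavior explicit, which the var-based original left implicit.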

This work follows the strategy taken with the previous packages; it improves the uniformity of shared patterns and significantly shrinks the codebase.


The codebase continues to grow, and the lack of ECMAScript module support has started to significantly complicate the build processes. We decided to initiate a “detour” to focus on the uniformity of the fundamental libraries.

This detour is largely completed for the actual library sources; converting the unit tests, however, is a much larger undertaking and will take more time, without affecting the prototype’s functionality.


We are at a point where we need to focus on the graph algorithms of the simulations. The Python NetworkX package is our gold-standard reference.

To interactively visualize the “forces” acting on our graphs, we are working on a simplified Typescript version. Ideas have been collected from (the old) dagre, from nx itself, and from orb.

The representation will be finalized in the next two or three weeks (see our simnet package)…
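To make the “forces” idea concrete, here is a minimal sketch of the kind of step such a simulation takes (the names and constants are illustrative, not the simnet API): each tick applies a spring force along every link, then integrates node velocities with a decay factor.

```typescript
interface SimNode { x: number; y: number; vx: number; vy: number }
interface SimLink { source: number; target: number } // indices into the node array

// One simulation tick: pull linked nodes toward each other, then move nodes
// by their (decayed) velocities. Repeated ticks settle toward an equilibrium.
function tick(nodes: SimNode[], links: SimLink[], strength = 0.1, decay = 0.9): void {
  for (const { source, target } of links) {
    const a = nodes[source];
    const b = nodes[target];
    const dx = b.x - a.x;
    const dy = b.y - a.y;
    a.vx += dx * strength; a.vy += dy * strength;
    b.vx -= dx * strength; b.vy -= dy * strength;
  }
  for (const n of nodes) {
    n.vx *= decay; n.vy *= decay;
    n.x += n.vx; n.y += n.vy;
  }
}
```

Real engines (D3’s force module, orb) add charge repulsion, centering, and alpha cooling on top of this same tick loop.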


There are a variety of options to use on the server side of the simulation framework. An obvious choice would be using Express. However, we can achieve more compact and satisfying results using Koa. As the objective is to arrive at a uniform and fully typed implementation, a quick conversion of the Koa codebase to Typescript occurred.

This quote summarizes the most attractive aspect of Koa: “… with async functions we can achieve ‘true’ middleware. Contrasting Connect’s implementation which simply passes control through a series of functions until one returns, Koa invokes ‘downstream’, then control flows back ‘upstream’.”
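The downstream/upstream flow is easy to see in a minimal re-implementation of Koa-style composition (a sketch of the idea, not Koa’s actual koa-compose code; the Ctx type here is hypothetical):

```typescript
type Ctx = { log: string[] };
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

// Each middleware runs code "downstream" before awaiting next(), and
// "upstream" code after next() resolves -- the onion model the quote describes.
function compose(stack: Middleware[]): (ctx: Ctx) => Promise<void> {
  return (ctx) => {
    const dispatch = (i: number): Promise<void> =>
      i < stack.length ? stack[i](ctx, () => dispatch(i + 1)) : Promise.resolve();
    return dispatch(0);
  };
}
```

Running two middleware functions through compose logs the outer one’s “in” code, then the inner one, then the outer one’s “out” code, exactly reversing on the way back up.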

The next step for a functioning simulation server is the GraphQL middleware. To start with, we borrow the straightforward logic from koa-graphql. We will later expand on this “lego-like” architecture for our actual simulation needs as well.

As part of building up our infrastructure, a tedious effort has been ongoing to incorporate the canned unit tests of the various components. As more and more tests pass, our confidence in stripping the unnecessary “complexities” grows. At this point, we have paused using the nx framework.


Without engaging in any dynamic vs. static typing arguments, the most significant reduction in the cognitive load of source code comes from typing all names. Specifically, typing allows for short variable names, e.g., the quintessential i for an index, k for a key, v for a value, etc. Proofing and rigorous mechanized validation of code are also a given, thus boosting “cognitive confidence.”

To support the higher-level data views of our model consistently and uniformly, the architectures proposed by the excellent immer and redux libraries have been our guide and inspiration.

Support for ECMAScript modules in the published npm packages has been lagging, however, and to minimize the potential “impedance mismatch” in typings, the essential parts of the open source libraries have been cloned (see license) and then uniformly shaped in our new rooted package.

As far as types are concerned, Typescript turns out to be an outstanding “meta programming” environment. Sorting through the redux toolkit and the rxjs libraries, one can learn a great deal about how to “compute” with types elegantly and efficiently.
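A small sketch of what “computing” with types looks like, in the spirit of the redux toolkit (the names and shapes here are hypothetical, not the toolkit’s API): a conditional type with infer derives a map of action creators from a map of handlers, so the compiler carries the argument types through.

```typescript
// Derive, at the type level, an action-creator signature from each handler.
type ActionsOf<H> = {
  [K in keyof H]: H[K] extends (...args: infer A) => void
    ? (...args: A) => { type: K; args: A }
    : never;
};

// Runtime counterpart: one generic loop, with all specificity in the types.
function makeActions<H extends Record<string, (...args: any[]) => void>>(
  handlers: H
): ActionsOf<H> {
  const out = {} as Record<string, unknown>;
  for (const k of Object.keys(handlers)) {
    out[k] = (...args: unknown[]) => ({ type: k, args });
  }
  return out as ActionsOf<H>;
}
```

Calling makeActions with a typed handler map yields action creators whose parameters and type tags are checked, with no per-action boilerplate.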


The converted components are now being tested with the React Testing Library. It is a tedious but necessary process. Cognitive Loads in Programming is a valid point of view when quick understanding and/or visual proofing of code is the objective.

The “patterns” in the code become the main focus and individual names, especially long ones, significantly reduce the available “working memory.” And after a while, the differentiating factor between functions becomes their “shape.”

To reduce the cognitive load, the “shape” and patterns in the code are simplified to their most essential features. For example, functions usually convert an x (with possible additional ...xs) into a y. The short name of the function and the types of its signature and variables are the differentiating qualities.

When sorting through hundreds of functions, it is crucial to be able to quickly zero in on the main “actors” and uniformity of the shapes allows one to filter out the unnecessary detail.
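As a purely illustrative example of this convention (the types X and Y are placeholders): a function converts an x, and possibly additional xs, into a y; the short names carry no meaning of their own, because the types do.

```typescript
type X = { id: number };
type Y = string;

// The "shape": (x, ...xs) => y. The signature, not the names, tells the story.
const f = (x: X, ...xs: X[]): Y => [x, ...xs].map((v) => v.id).join("-");
```

Scanning hundreds of such definitions, the eye filters on the signature shapes first and descends into bodies only when needed.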

Note: I have been using the VS Code environment and I particularly appreciate its code folding functionality. Therefore, I prefer to merge a component and all its ancillary “helpers” into one large file instead of a myriad of “one default function” files.


Both data representation and visualization are priorities, and the framework was designed to support their parallel development. Nevertheless, the React frontend, see reboot, started to exhibit the notorious “inconsistency confusion” inherent in the rapidly evolving Javascript world.

Initial validations of the now-supported ECMAScript modules returned positive results, including for the Jest testing framework. As reboot is a type-safe (i.e., fully Typescript) component collection for React, the usual Javascript workflows to lint and to “Babelize” the code are unnecessary complications.

Specifically targeting only the latest browsers and Node revisions, reboot was therefore converted to be compiled directly by tsc without any additional steps involved, while also uniformly supporting the ECMAScript modules standard. Furthermore, this decision allowed us to seamlessly combine the various divergent React libraries and simplify the type signatures.
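For reference, a minimal sketch of the kind of tsconfig such a direct-tsc, ESM-only setup implies (the exact values are assumptions for illustration, not reboot’s actual configuration):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "ES2020",
    "moduleResolution": "node",
    "jsx": "react-jsx",
    "strict": true,
    "declaration": true,
    "sourceMap": true,
    "outDir": "lib"
  },
  "include": ["src"]
}
```

With modern target and module settings, tsc alone emits browser- and Node-ready ES modules plus declarations, so no Babel pass is needed.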

Unit testing of the new reboot with Jest is still ongoing, however, the now streamlined front-end will be helping the backend development as well. Specifically, the simplified workflow and the one-step compilation, aided by the preserved full typing context, have created a uniform and predictable environment.


The limited initial database schema has been enhanced to support a complete network of our Node entities and Edge relationships.

The Size quantities of both (essential in any visual simulation) are also encoded. In addition to Edge relationships, Pack groupings of Nodes are also allowed and uniformly represented.

Hierarchical structures of diagrams can have a consistent design through the introduction of Port nodes to disambiguate Edges crossing Pack boundaries between two Nodes.

Here is the simplified Prisma schema. Both Packs and Ports are otherwise simple Nodes and they reuse the primary keys. Moreover, Ports are meaningful only in the context of their Pack:

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

model Name {
  id    Int     @id @default(autoincrement())
  val   String
  nodes Node[]
  edges Edge[]
}

model Size {
  id    Int     @id @default(autoincrement())
  val   Int     @default(0)
  min   Int?
  max   Int?
  nodes Node[]
  edges Edge[]
}

model Node {
  id     Int    @id @default(autoincrement())
  name   Name   @relation(fields: [nameId], references: [id])
  nameId Int
  size   Size   @relation(fields: [sizeId], references: [id])
  sizeId Int
  ins    Edge[] @relation("out")
  outs   Edge[] @relation("in")
  asPack Pack?  @relation("asPack")
  packs  Pack[]
  asPort Port?  @relation("asPort")
}

model Edge {
  id     Int   @id @default(autoincrement())
  name   Name? @relation(fields: [nameId], references: [id])
  nameId Int?
  size   Size  @relation(fields: [sizeId], references: [id])
  sizeId Int
  in     Node  @relation("in", fields: [inId], references: [id])
  inId   Int
  out    Node  @relation("out", fields: [outId], references: [id])
  outId  Int
}

model Pack {
  id     Int    @id
  asNode Node   @relation("asPack", fields: [id], references: [id])
  nodes  Node[]
  ports  Port[]
}

model Port {
  id     Int  @id
  asNode Node @relation("asPort", fields: [id], references: [id])
  pack   Pack @relation(fields: [packId], references: [id])
  packId Int
}
This is the initial ER diagram of the schema:


The design of the schema makes it possible to efficiently represent or encode an arbitrary network such as the one depicted below:
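Read as plain Typescript, the schema’s core entities map to shapes roughly like the following (a hypothetical in-memory mirror for illustration, not the generated Prisma client types):

```typescript
interface Size { val: number; min?: number; max?: number }
interface Node { id: number; name: string; size: Size }
interface Edge { id: number; in: number; out: number; size: Size }

// A Pack groups Nodes; a Port is a Node that disambiguates an Edge
// crossing its Pack's boundary. Both reuse the Node primary key.
interface Pack { id: number; nodes: number[]; ports: number[] }
interface Port { id: number; pack: number }

// Example traversal: all edges incident to a given node.
function edgesOf(nodeId: number, edges: Edge[]): Edge[] {
  return edges.filter((e) => e.in === nodeId || e.out === nodeId);
}
```

Keeping Packs and Ports as id-sharing specializations of Node means every visual element, grouped or not, is addressable through one key space.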



The React front-end components have been completed (see the reboot package), and the test suite is being converted to Jest. The package is a simplified and streamlined version of the open-source React-Bootstrap framework. Work to add a selection of the needed Lumino “widgets” has also started. However, solving stylistic mismatches will not be a priority at this time. The intention is to use the Bootstrap scss settings/mechanisms as much as possible.

Tying in the SQLite database and GraphQL query support for simulation data has also started (see the simsrc package). The initial schema will have a limited number of essential elements, just enough to bring up the framework and run the build and test scripts. Nevertheless, the schema will continue to be updated as functionality is added.

Work has also started on diagram visualization (see the simnet package). Initially, the already existing algorithm implementations are adapted to the new framework. The needed Cytoscape (and D3) functionality will be added as a next step. Once again, the focus is on the framework and the workflows, for the time being. Only very basic functionality is supported in this phase.


The focus is on the framework. We will attempt to combine JupyterLite (and WebAssembly) to eliminate the Python interpreter and run a (simple) full simulation entirely in the browser. To this effect, we made the following design choices and decisions regarding the framework.

For the front end, React components, routing, and hooks will be used with Typescript at the lowest level. The “create react app” templates gave us the initial skeleton. For stylistic details and a uniform look, the decision to use the higher-level Bootstrap was a fairly obvious one. Our reboot package encapsulates these decisions into a uniformly extensible set of UI components. As JupyterLab uses the Lumino UI components, an immediate need to expand and adapt reboot presented itself.

For the backend, we will use a SQLite database. Both Python and Typescript (via perhaps a Prisma ORM) will be able to store and query the unified simulation data. GraphQL will provide a uniform “view” of the data for all the visualization elements of the app. The emphasis will be on establishing a seamless connection to the data-driven D3 and the Cytoscape “graphing” components.

While abstractions on many levels properly split the architecture into several independent packages, it is still desirable to use a monorepo for development. Many build systems are available; however, NX seems the most appropriate, specifically because its plugin architecture can be expanded to also support the JupyterLab (and Python) parts of the project.

For testing of the Javascript parts of the app we chose Jest. For the Python parts, we follow the framework proposed by the JupyterLab extensions.

The initial layout of the project is experiencing many changes and adjustments this week. There is a fair amount of “trial and error” involved when piecing the different parts together. However, convergence to a stable workflow is also quickly established once the most critical parameters get dialed in.


The focus is on the concepts. A RICO case first needs to establish a legitimate “enterprise” and then show how “culpable persons” hijack the operations for nefarious purposes.

The following diagram provides an abstract overview of such a context for simulation purposes. It reflects a hierarchical entity-relationship model, where the entities are circles and the relationships are simple lines. They both have various attributes provided as parameters. Arrows represent the actions that give the system simulatable “dynamism.”


Dynamic visualizations of complex interactions are significantly more flexible than simple static diagrams. The following diagram attempts to convey the possibility of the existence of a “hub of hubs” in RICO contexts. Pointing to the filed RICO class action complaint, HUB 1..N could represent the number of Family Courts partaking in the “federal reimbursement” program allegedly coordinated through the DOR’s Child Support Enforcement Division (or HUB OF HUBS). A SPOKE represents an actual case.


As a result of the actions, the following changes would be visualized first: (1) the width of the edges (intensity), (2) the size of the circles (profit), and (3) the number of edges or circles (control). The following diagram highlights the causal sequence of (a) the identification of cases with “profiteering potential” (allegedly 10% of all cases), (b) the fabrication of “high conflicts” (through massive invalidations), and then (c) the resulting high intensity of actions (including PREDICATE ACTS). The achieved objective of maximizing “federal reimbursements” (and REINVESTMENTS) is also shown through the now wider feedback arrows.


In addition to the patterns of committed predicate acts and the reinvestments of resulting incomes, additional ACQUISITION and EXPANSION of “controls” are simulated through the increased number of AGENTs brought in (or used by more and more cases). The AGENT 1..N are the GALs (4 in the concrete case), the therapists and doctors (21), and the attorneys (29).

The not just allowed but also stereotypically encouraged obscene profiteering (~$1M in the concrete case) of these “trusted” agents is simulated by proportionally inflated circles.


The last introductory diagram depicts the necessary CONSPIRACY element of RICO cases. In the concrete case, even lay parties were seemingly forcefully bullied to comply (e.g., serve filings) and to coordinate with each other to ensure the success of the “to silence and enslave” orders.


With the framework and the overall simulation objective defined (i.e., to visualize the dynamic connectedness of a variable number of circles with changing sizes and adapting widths of the connections), the next step will be to create the software framework for an initial much-simplified simulation in the browser.

Archive of prior blogs