Modeling Judicial Activism
Psychological Non-Profiles Of American Bar “Frequent Fliers”
We have outlined our overall objective. In this group of blog posts, we set out to introduce the conceptual framework for our effort.
We present a high-level yet easily simulatable view of our problem domain. We define key abstractions to give our subsequent work an unambiguous foundation, and we describe mechanisms to numerically analyze, or simply measure, progress and results in a reproducible fashion. Finally, we draw conclusions and offer usable, “packaged” solutions.
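One way to keep measurement reproducible is to score model output against known-correct answers with a single, fixed metric. The helper below is a minimal sketch of such a metric; the function name and the sample answers are illustrative assumptions, not part of the series’ actual code.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answers.

    A deterministic metric: the same inputs always yield the same score,
    which is what makes repeated measurements comparable.
    """
    if len(predictions) != len(references):
        raise ValueError("prediction/reference counts differ")
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

# Example: three arithmetic answers, two of them correct.
print(exact_match_accuracy(["4", "7", "12"], ["4", "7", "11"]))  # 2/3
```

Because arithmetic answers are unambiguous strings, exact match suffices here; fuzzier domains would need a softer comparison.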
Having outlined our overall objective and chosen the right tools to achieve it, in this group of posts we introduce a flexible, scalable, and easily testable “proxy” model that we can use to drive the development of our deep-learning modules. Our problem domain is inherently nebulous; without firm, trustworthy guidance, we could easily lose direction.
Since we are first attempting to teach our computer to “understand”, we reach for the ultimate, universally shared symbolic language: simple, elementary mathematics. To this end, we already introduced the problem of training a model on first-grade arithmetic in a previous blog post.
In the new posts presented below, we greatly expand on that first, exploratory attempt. Numerous recent results in NLP (natural language processing) approach, or even surpass, measured levels of human comprehension. We provide a framework of modules that reproduces the ideas, schemes, and results presented in those recently published research papers.
Because the symbolism of our “proxy” model is universally understood and, more importantly, fully deterministic and predictable, synthesizing a virtually unlimited and arbitrarily precise base of training samples is well within our reach. With purpose-built training data sets, we then build on the published state-of-the-art results and measurably fine-tune them.
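Such a synthesis step can be surprisingly small. The sketch below generates first-grade addition and subtraction problems from a seeded random stream, so the corpus is fully deterministic and can be regenerated at any size; the function name and prompt format are illustrative assumptions, not the series’ actual generator.

```python
import random

def make_samples(n, seed=0, max_operand=20):
    """Synthesize n first-grade arithmetic problems as (prompt, answer) pairs.

    A seeded random.Random instance makes the output fully deterministic:
    the same (n, seed, max_operand) always yields the same corpus.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        a = rng.randint(0, max_operand)
        b = rng.randint(0, max_operand)
        if rng.random() < 0.5:
            samples.append((f"{a} + {b} =", str(a + b)))
        else:
            a, b = max(a, b), min(a, b)  # keep results non-negative
            samples.append((f"{a} - {b} =", str(a - b)))
    return samples

print(make_samples(3, seed=42))
```

Since every answer is computed from its prompt, the labels are exact by construction; scaling the corpus up is just a matter of raising `n`.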