The most interesting Class of Computer Languages

Posted on April 2, 2018 by Rick Jelliffe

In the previous blog (XML as a canary in the mine: can Intel ISPC help stagnant C get its mojo back?), I mentioned three classes of languages that are thriving, plus one that I suggest is not. But that leaves out what I think is the most interesting class of language, and indeed the one that I think Schematron belongs to.

The four language classes I suggested were:

  • “I wanna be LISP when I grow up”
  • “The Children of Simula”
  • “Everything is a…”
  • “Don’t tell me no”

Apart from a misc class for toys and jokes and experiments, I think there is one major class missing, and I think it is the most important. It is also, I think, the class that Schematron fits into:

  • People First. Or “Man is the measure of all things” perhaps.

That is a little trite. What I mean is languages that are designed with some particular human characteristics (or, at least, some theory of them) as the motivation, use case and ‘editorial’ principle of the language design.

The golden age for this class was the late 1960s:

  • Logo: the constructivist theories of childhood education in the 60s and 70s led to Seymour Papert’s Logo language with its turtles.
  • NLS: Douglas Engelbart took Whorf’s hypothesis and decided that ‘the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. He thus set himself to the revolutionary task of developing computer-based technologies for manipulating information directly, and also to improve individual and group processes for knowledge-work.’
  • Smalltalk: At its heart, Smalltalk, like Logo, springs out of a cohesive, unified vision of human behaviour, as embodied in Alan Kay’s Dynabook. Kay’s early history of Smalltalk specifically mentions an influential lecture by Marvin Minsky on constructionism, Engelbart, and ARPA’s agenda of human-computer symbiosis as guiding him. (Of course, the technologies that come out of any theory have independent lives and may take off on different tacks: superficially, Smalltalk clearly fits as a “child of Simula” because of its adoption of Simula 67’s message passing, but it also included a strong influence from Logo (Alan Kay famously described Smalltalk as being for “children of all ages”), including some syntax and turtle graphics. But I think those are details.)

These ideas have been incredibly potent. Kay’s prediction of the notebook computer, calling it a notebook computer and predicting it would be workable in the 1990s, is uncanny. But these systems and languages are not merely ideas about Human-Computer Interfaces or UX and so on: that misses their entire point. They relate not to how people do things, but to what people do.

And this is where Schematron comes in. The central thing in Schematron is not a model of how people do things (though there is a lot of attention to that: a flat structure, an efficient query language, variables, and so on) but what they do: they need to understand; they need to tell and be told about patterns; and they need to tell and be told in a native language that is comprehensible in terms of their experience (i.e. in terms that can be presented at the interface and in the ‘semantics’, not necessarily in terms of the data structures buried deep in some executing process).
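As a small sketch of what that looks like, here is a minimal Schematron pattern (the invoice vocabulary and its attributes are hypothetical, invented here for illustration). The context and test are XPath expressions against the data, but the assertion is worded in the reader’s own terms:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern id="invoice-totals">
    <!-- The rule context and test are XPath; the message is plain human language -->
    <rule context="invoice">
      <assert test="@total = sum(item/@price)">
        An invoice's total should equal the sum of the prices of its items.
      </assert>
    </rule>
  </pattern>
</schema>
```

When validation fails, the reader is told about the invoice and its items, not about a parse tree or a type error inside some executing process.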

My favourite analogy for program execution is that it is a kind of breathing: breathing in, our function gets the information it needs from us; breathing out, it gives the information we need back to us. Our current systems are great at taking information in, but miserly at providing it back. But the start and end points are humans, and technologies that provide no way of customizing the information for the particular humans are, in my opinion, a form of technologically forced (and therefore socially forced) disablement that entrenches a technological priest-class, if you know what I mean.

Schematron’s use of XPath is now widespread, and people sometimes congratulate me for the idea, but I think the number of times it has been separately invented (to be charitable) is a sign it was kinda obvious, and it had antecedents anyway. But that is not what I wish Schematron’s legacy to be: what I would prefer to see is more computer systems and languages allowing their diagnostics and interactions to be customized, not just in the messages but also in what triggers them:

Schematron’s strength is not only that it lets you report things in terms that humans are interested in, but that it lets you select (using phases) which messages humans will be interested in at different times. And, in most cases, as a value-add process, it allows new patterns to be added to a working system as needed, or enhanced as new issues arise.
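Phases can be sketched like this (the pattern names and document vocabulary are hypothetical): each phase activates only the patterns whose messages its audience cares about at that point in the workflow:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <!-- An author fixing structure sees only structural messages... -->
  <phase id="authoring">
    <active pattern="structure"/>
  </phase>
  <!-- ...while an editor preparing publication also sees house-style ones -->
  <phase id="publication">
    <active pattern="structure"/>
    <active pattern="house-style"/>
  </phase>

  <pattern id="structure">
    <rule context="chapter">
      <assert test="title">A chapter should start with a title.</assert>
    </rule>
  </pattern>
  <pattern id="house-style">
    <rule context="title">
      <report test="string-length(.) &gt; 70">
        A title longer than 70 characters may be truncated in the contents page.
      </report>
    </rule>
  </pattern>
</schema>
```

The validator is invoked with a chosen phase, so a new pattern can be added to a live system without flooding every user with its messages.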

Systems should have extensible, user-oriented messaging that allows deeply buried information to be surfaced to users as the need emerges.

I know that there has been a lot of attention to “separation of concerns” in technology for the last two decades: dependency injection, aspect-oriented programming and so on often give better diagnostics as an example use-case. But it should not be a cool thing that we shoe-horn into our technology; it should be the central organizing principle: there is no computing that does not need to communicate intensely with humans at some stage in its lifecycle, even if that is just at development time.