I stumbled upon this story of a $1.8 million mechanical clock, featuring a massive time-eating grasshopper, that made its debut at the University of Cambridge on Friday, with famed cosmologist Stephen Hawking on site to introduce the strange and provocative timepiece. I can’t even remember what I was searching for, but it’s one of those great moments of flying half-blind through the intertubes. Oh, and this is my first Digg post as well. I mean to blog a lot more stuff I find interesting, but never bother with the hassle of logging into Blogger. This will make it easier, so … uh, I’ll see you again soon then?
Update: I’ve added an embedded version of the slides at the bottom of the post; my cool animations and lots of fonts are wrong, but hey, you can read it at least. 🙂
Not to put too much sugar in your otherwise fine brew of tea, but being at TMRA 2008 this year was one of the most fantastic experiences I’ve had so far. Not only did I catch up with some old friends, I met some new ones I know I’ll stay in touch with. So many smart and easy-going folks gathered in one place … I’m surprised it didn’t disintegrate in a puff of logic, as there really must be some cosmic law against it. Although I see the TED conferences still churning out good stuff, so it must be allowed. And yes, I do equate TMRA with TED; it was that great.
This year I was invited to hold the opening keynote speech, which I called “You’re all crazy – subjectively speaking“, a romp through the Topic Maps community, a plea to remember epistemology in all things data modeling, and the message that being “subject-centric” is not a technical feat; it’s about social processes and agreement (or, at least, a rough understanding of each other).
I used a few cheap interactive ploys to hold the audience’s attention, making them audibly agree or disagree with certain assertions I put up on the screen. It was very effective at raising the collective awareness of the issues I was trying to point out, and especially helpful when I needed to show that there are some things we all disagree on. And not only that, but things we should disagree on. I think people in general thought it was a good speech, and the feedback was great, so thanks to all for that.
I’d like to thank Lars Marius Garshol and Lutz Maicher for inviting and encouraging me, Patrick Durusau, Jack Park (you need a website or blog, mate!) and Robert Barta for just being who you are, and everyone else for making me once again believe so strongly that the Topic Maps community is the best thing since recursive properties and frames theory!
I’m sure I’ll write more on what went down at TMRA 2008, but right now I need to make porridge for my kids. Later.
Oops, I totally forgot to mention to the world that I’m the opening keynote speaker at the TMRA 2008 conference (one of the two big yearly Topic Maps conferences) in Leipzig next week (15-17 October). My talk is titled “We’re all crazy – subjectively speaking” and will contain at least one bad joke, two pretty good ones, some philosophical ranting and hopefully lots of community building. I really, really hope to see you there; find me, say hello, let’s have tea and discuss whether my two jokes really were good or not.
The big question is, how did I forget to tell you about this? I’ll let you know that in a few days time or so.
Lately I’ve been talking with librarians again. I left their den about 8 months ago and went a bit cool after that, needing some fresh air and a bit of distance from everything in that world. But, as I said, I’ve been lured back again by my own stupid notion of saving humanity from itself through the channels the library world offers.
As much as I’m a fanboy of the library world, I’m also quite critical of library-world thinking, the collective direction it’s heading, and the way they deal with probably their biggest challenge ever: their own survival when the book turns digital.
Today I’ll rant a bit about a piece of technology that is often hailed as the library world’s ticket into the modern techie world, a piece of the future solution, albeit with a few minor warts that could be sorted out. I don’t agree; I think MARCXML is the plague, and I’m here to tell you why. First, here’s how the Library of Congress describes it;
framework for working with MARC data in a XML environment
First of all; framework? Framework suggests something more than a mere format, and yes, there’s an XSLT sheet or two there that could convert MARCXML to HTML or somesuch. That’s not a framework; that’s a format with a few conversion scripts. Framework suggests tools I can use to get some juice out of it, and those are nowhere in sight.
Anyway, let’s move on to the 8 main design goals or considerations, with my comments;
1. Simple and Flexible MARC XML Schema
The core of the MARC XML framework is a simple XML schema which contains MARC data. This base schema output can be used where full MARC records are needed or act as a “bus” to enable MARC data records to go through further transformations such as to Dublin Core and/or processes such as validation. The MARC XML schema will not need to be edited to reflect minor changes to MARC21. The schema retains the semantics of MARC.
All control fields, including the leader are treated as a data string. Fields are treated as elements with the tag as an attribute and indicators treated as attributes. Subfields are treated as subelements with the subfield code as an attribute.
Oh, it’s simple alright, in the same sense that a frog sitting in a pot of cold water that’s slowly heated to boiling won’t hop out to save itself, something attributed to the frog’s very simple neuron and nerve control over time (meaning they’re great at short-term tasks, but suck when the time stretches out a bit). We’re talking about mechanisms so simple you wonder how they didn’t get outed in the evolution of things.
Let’s start with “All control fields, including the leader are treated as a data string.” Here’s a quick example;
<leader>01142cam 2200301 a 4500</leader>
<controlfield tag="001"> 92005291 </controlfield>
<controlfield tag="008">920219s1993 caua j 000 0 eng </controlfield>
Not sure you can see it straight away, but they’ve introduced a reliance on whitespace being preserved, in a format whose tooling culture treats getting rid of reliance on whitespace as a given. How’s that for a good start? I’m not sure how many times this has bitten me, as pretty much any and all XML tools out there are whitespace-agnostic by default (meaning they’ll often collapse it). In order to use MARCXML properly you have to change the whitespace options in pretty much all your tools, if they even allow you to.
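To see just how badly whitespace collapsing hurts, remember that MARC control data is positional; every blank carries meaning. Here’s a little sketch in Python, with the 008 value padded out to its full 40 characters (position assignments per MARC21; treat the details as illustrative rather than gospel);

```python
# MARC control data is positional, so every character, blanks
# included, carries meaning. Positions below follow the MARC21
# bibliographic 008 field.

field_008 = "920219s1993    caua   j      000 0 eng  "

date_entered = field_008[0:6]    # "920219": when the record was created
date1        = field_008[7:11]   # "1993": publication date
place        = field_008[15:18]  # "cau": place of publication code
language     = field_008[35:38]  # "eng": language code

# Now let a whitespace-agnostic tool "normalize" the field:
collapsed = " ".join(field_008.split())

# The field is no longer 40 characters, so every position past the
# first collapsed run points at the wrong character, and position 35
# (the language code) doesn't even exist any more:
print(len(field_008), len(collapsed))  # 40 vs. something much shorter
```

Run a record like that through any whitespace-normalizing pipeline and the positional lookups silently return garbage, with no error anywhere.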
Next up: if you go to the lengths of creating an XML schema, you should already be aware that semantic meta data becomes part of your element and attribute names (and I’ll get back to this point a lot, really). Sure, it’s a quick and dirty way to get your XML chops started, but is it wise to do this?
<datafield tag="245" ind1="1" ind2="0">
<subfield code="a">Arithmetic /</subfield>
</datafield>
I’ll translate what this does for you;
The MARC tag 245 means “title statement”, and the code “a” means, uh, title. This particular madness comes from the culture of MARC itself, which I’ll rant about some other time (and have in the past), so I’ll try to stick to the pure XML part of it;
What were you thinking? That 245 is easier to remember than “title”? Hardly. Perhaps the international angle is more convincing: that 245 is easier to remember for those who want a title in Norwegian (“tittel”)? I seriously can’t think of any other format that does it this way, and not doing so doesn’t seem to have stopped the success of other formats in the world. No, this particular thing has everything to do with the fact that MARCXML isn’t as much XML as it is MARC; it’s really MARC with a bad hairdo, showing a thinking that as long as we can claim some affiliation with XML, then we’re hip and cool and drinking the new techie XML kool-aid.
And this is by far the biggest problem with MARCXML; it thinks it is XML, but it really isn’t, which leads to all sorts of unfortunate situations, like;
- Librarians are fooled into thinking their meta data is ready for an increasingly XMLish world
- Librarians think they can throw XML tools and programmers at it with ease
- Librarians think they get all the XML goodies and benefits
Let’s run through these;
Librarians are fooled into thinking their meta data is ready for an increasingly XMLish world
There’s not much these days that hasn’t got some anchoring in XML technology. I don’t need to go into the details of all the XML technology used just to write and publish this little blog post. But when your MARCXML isn’t real XML, all that XML technology is rendered useless to you.
Let me try to clarify this as simply as I can, through the use of XPath (an XML query language used pretty much anywhere there is XML technology). Here’s roughly what I would write if the XML were real, semantic XML;

//title

And here is what I have to do with MARCXML;

//datafield[@tag='245']/subfield[@code='a']
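To make the contrast concrete, here’s a small sketch using Python’s ElementTree (which speaks a subset of XPath). The sane <title> document is of course my own hypothetical ideal, and I’ve left the MARCXML namespace off for brevity;

```python
# Two tiny documents carrying the same title: one in sane, semantic
# XML, one in MARCXML style, queried side by side.
import xml.etree.ElementTree as ET

sane = ET.fromstring("<record><title>Arithmetic</title></record>")
marcxml = ET.fromstring(
    '<record>'
    '<datafield tag="245" ind1="1" ind2="0">'
    '<subfield code="a">Arithmetic</subfield>'
    '</datafield>'
    '</record>'
)

# Semantic XML: the query reads like what you mean.
print(sane.find(".//title").text)

# MARCXML: you need a MARC reference manual just to write the query.
print(marcxml.find(".//datafield[@tag='245']/subfield[@code='a']").text)
```

Both print the same title, but only one of the two queries can be written, or read, without a cataloger at your elbow.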
We humans have a good sense of structure. Our brains are great at categorization, we do it all the time, break things into category prototypes and derivatives to gather some kind of meaning. A tree-structure is the closest and easiest structure that binds humans and computers together, in the sense that trees are easy for a computer to work with, and easy for a human to understand. (We humans have a natural knack for prototypes and graphs [not the presentation slide kind] that I’ve talked about earlier, which we shouldn’t misinterpret here)
With these faux but useful tree-structures comes mediation between man and computer, a way to advance us further. Take note, because this is an understated and overlooked benefit of XML over any binary (or XML-wannabe) format out there. And you can find none of these benefits in MARCXML, because there are only two levels involved: field and sub-field. It’s, in fact, rather flat, and with non-semantic names. Can you get any further from the reasons XML was created?
Librarians think they can throw XML tools and programmers at it with ease
No, you can’t. Your XML is bad, and XML tools and programmers are going to struggle with it. They’ll waste most of their time trying to figure out why the hell someone came up with this evil way of making your brain melt. Well, obviously, if your brain melts, it’s evil, but there is something so anti-XML about the way MARCXML was designed that I’m starting to wonder.
There are probably a ton of tools out there that deal great with XML, but not a single tool (at least in the mainstream) has ever heard of MARCXML, and even when you throw the MARCXML Schema at them it does them little to no good. You still need domain experts to do anything with it, you still need special knowledge to move around in it, and in the absence of typed data and semantically rich markup you get absolutely nothing for free.
Librarians think they get all the XML goodies and benefits
XML comes with a host of good stuff, like xml:id and IDREF attributes that lots of tools understand (including XSLT), built-in language support, extensibility through namespaces, mixed content models, character encoding rules and guarantees, Unicode (for the most part). And when you think of all the XML technologies out there that already adhere to and build on these benefits to create a complete development universe, guess who’s missing out on all of this?
2. Lossless Conversion of MARC to XML
3. Roundtripability from XML back to MARC
Both of these are the same; we’re not using any of the goodness of XML, we’ve pretty much got MARC in a thin XML wrapper, so we can easily convert back and forth between MARC and MARCXML. But conversion between XML schemas isn’t in scope, so as long as you’re working in your own little non-shared universe you’re good to go; life sucks, though, if you dare step out of it.
4. Data Presentation
Once MARC data has been converted to XML, data presentation is possible by writing a XML stylesheet to select the MARC elements to be displayed and to apply the appropriate markup.
This must be part of that “framework” they’re talking about but, um, you can present MARC elements and records with or without XML, and converting it into something else in the first place implies that you can do “stuff” with it. This point is mere fluff.
5. MARC Editing
Some single or batch updates such as adding, updating, or deleting a field to a MARC record can be accomplished with simple XML transformations
Ugh, more fluff. This is basically saying “you can do stuff with it. Do it yourself.”
6. Data Conversion
Most data conversions can be written as XML transformations. For more complex transformations of the data, software tools which read MARC XML can be written.
And yet more fluff, saying the same “you can do stuff with it. Do it yourself.”
7. Validation of MARC data
Validation with this schema is accomplished via a software tool. This software, external to the schema, will provide three possible levels of validation:
* Basic XML validation according to the MARC XML Schema
* Validation of MARC21 tagging (field and subfield)
* Validation of MARC record content, e.g., coded values, dates, and times.
Now it’s getting crazy. First, “basic XML validation according to the MARC XML Schema” means you can make sure that the XML document hasn’t got more than its 5 element types and the right small set of attributes, and that’s it. Basically, the advantage you get here is being sure that the crappy structure of MARCXML is preserved and valid. Goody.
Secondly, validation of tagging doesn’t exist! What they really mean is that the formatting of the tagging attributes follows certain character-based rules, that the (extremely loose) type is correct. Tagging, you may ask? No, not tagging (which would be useful), but the MARC tags, which come in the absolute number of 999 and are, of course, all numbers. And the validation doesn’t even adhere to the type-based system the tags themselves denote. Incredible, ain’t it?
Third, the bragging about “Validation of MARC record content” is pure nonsense; that validation doesn’t exist unless, you guessed it, you made it yourself or found someone else’s code. Good luck with all that.
By using XML as the structure for MARC records, users of the MARC in the XML framework can more easily write their own tools to consume, manipulate, and convert MARC data.
Finally, the biggest bullshit statement of all, the one that basically says “now it’s in XML; everything will be easy from here on in.”
This last section gets its own headline;
What really happens
Seriously, do the people involved in MARCXML have any expertise in XML? I know this is a bold and somewhat insulting question. I can understand why MARCXML became what it is, because it’s the first and simplest step one can take in getting anything into XML. The claims made about it, though, do not hold up to scrutiny, and in fact outright bullshit you into thinking MARCXML should even be considered a part of your development tool-chest. It should not.
The whole idea of XML is to have your meta data be the markup, and the data be, uh, data. When we have complex titles, here’s what it should look like;
<title>Arithmetic <responsibility>Carl Sandburg ; illustrated as an anamorphic adventure by Ted Rand.</responsibility></title>
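And once the meta data is in the markup, any generic XML tool can take it apart; a quick sketch with Python’s ElementTree (the <title> markup being my hypothetical ideal, not any real schema);

```python
# Mixed content in action: when the meta data is the markup, a generic
# tool can extract exactly the piece you want, no MARC manual needed.
import xml.etree.ElementTree as ET

title = ET.fromstring(
    "<title>Arithmetic "
    "<responsibility>Carl Sandburg ; illustrated as an anamorphic "
    "adventure by Ted Rand.</responsibility></title>"
)

print(title.text.strip())                 # the title proper
print(title.find("responsibility").text)  # just the responsibility statement
```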
But even this isn’t good enough; we need typed data values, so that we can verify that what we put in can be used for something we know about, and this is glaringly absent from MARCXML. They probably thought the problem was too hard and could be dealt with later, but we are much later now, and nothing has changed. MARCXML is luring poor innocent librarians into thinking they’re XML savvy, having catalogers think it solves some kind of meta data exchange problem with non-librarians, and making library techies embarrassed to ask XML questions in the fora of the world.
Take a look at this insane example they provide on their website. If you’re a MARC junkie you might make something out of it, but anyone else will balk at the complexities thrown at them. And the really bad part is that this stuff ain’t complex; it only looks that way through crap XML. Here, being in XML is working against you. So, don’t show this to your parents.
Finally, forget that MARCXML ever came to be, and look to MADS and MODS instead. Anything but MARCXML. I beg you.
There’s a mystical place out there, a place drenched in magic, adventure and fantastic experiences, where your soul and senses meet the challenge of the human spirit. No, not the church nor the congressional house, but the library.
I love this place. I love this culture. But it needs your help, because it is dying. There are many reasons why this is so, but in my mind the two main ones are that a) the book as a medium ain’t good enough once the world hits a certain complexity, and b) librarians still think that the library is mostly a place to find books on shelves.
As much as I could go on at length on both these things – and I probably will in the near future – right now I’d like to tell you a secret. It’s not a big thing and I might be wrong, but I’ve observed this little professional quirk the librarians embrace quite vigorously;
Librarians have no opinions.
Well, that’s obviously not true, as many of them have lots of great opinions which they’ve shared with me on a number of occasions. But when you ask them for information they will not tell you which book on the subject is the best, because, you know, we’re human; you have to find that piece of opinion out for yourself. Librarians are not supposed to say things like “you’d enjoy that book”, only (if they’re big risk-takers on the edge of library society) “I enjoyed this book,” a doctrine with a long and proud history in the library world. I know this is very academic and proper, but nowhere have I seen it as strong as in the library world.
I’m an avid library user. I’ve always loved libraries, and especially when I’m looking for some specific piece of information it’s the number one place to go. And as such a heavy library user I’ve become pretty good at telling good from bad. Let’s define good as “getting what you wanted, or something even better” and bad as “getting something in the ballpark of what you wanted, with a feeling that there surely must be more.” These definitions work just as well for literature as for history, science, or any other reason you go to the library.
I think you can guess by now what I’m getting at: the good library experience comes through librarians who dare to challenge the stronghold of “no opinion.” When the librarian daringly points out that I might enjoy this book by some other author than the one I was looking at, that’s when the magic happens! Serendipity!
Good librarians know to break this “no opinion” guideline. Good librarians know how to create magic and adventure. And I think good librarians know that this is their biggest and most wonderful weapon in the battle for the library of the future.
My good friend (whom I miss dearly) Bobby “Slobo” Graham from the National Library of Australia kept telling me a saying of sorts whose origins I can’t recall;
“Serendipity is when you go to the library to take out a book, and end up taking out the librarian.”
At the library I’ve smiled, I’ve cried, I’ve danced, struggled, had love, made philosophy, drank the kool-aid and smelt victory. You should also. And tell those risque librarians that you love them. I know I do.
Low blog season for me as life swooshes by and I’m preparing for the time that has already gone. But I do trust Google, and here’s why.
Ok, summer is over, and I’m back at work, the household is busy with their various stuff (night-school, school, kindergarten, mini-kindergarten, from oldest to youngest). We’re an extremely busy little flock, I dare say.
However, amidst all this busy family life there is a slow, slow, slow realization of the self going on. I’m getting old. And I’m not working with what I want to do right now. Seriously, if I’m going to have a super-busy life, I would at least like it to be busy with stuff I want to do. The family part is dandy. No, it’s the work part that keeps me up at night.
The company I work for is great. Seriously. Really great. It’s not their fault, it’s mine. You see, I think a lot, and a lot of my thinking is about stuff that my work really doesn’t do much of. I have dreams, passions and aspirations that travel well out of scope for the IT consultancy gig I do every day.
Like building the most energy-efficient, low-maintenance wave-based hydro-electric generator ever. Or the best no-maintenance, micro-inducing, piggy-backed salt-to-freshwater plant ever thought of. Or the zero-impact house water system (gray and white at the same time). Or designing the coolest child-friendly gadget, the one my daughter cries about at night for not getting. Or changing organization processes and communication from bad to great. Or helping people realize their own potential. Or, you know, making people happier.
I’m an ideas guy. I shouldn’t really be a boring consultant, at least not for customers who want me to do exactly as they say. I’m into innovation, into doing new and exciting (but not stupid) things, bordering on crazy but never going as far as sick. I love to work with people to come up with new stuff, or refine old stuff. Fix problems. Solve the impossible. Free Willy. Save the planet. Create art.
I think a lot, and I’m getting older.
Ok, back to tuning my Tomcat context to understand proper REST patterns without interfering too much, and then fine-tune some Topic Maps optimizations in my RDBMS. *sigh* *pining for the fjords*
Sam’s got the blues …
… and we’ll be back to “normal” next week. Hope you’ve all had a great time. We went to Denmark for a quickie, and to the mountains of Trysil (Norway) for a week’s cabin-fever fun. We’ve had a good time, but it’s good to be back, too. Now, what was I doing again?
This morning was a good one. I got on the bus, armed with breakfast banana in hand, and right there in front of me sat fellow Topic Mapper Stian Danenbarger (from Bouvet), who, as it happens, lives literally just down the road from me. I’ve been living at Korsvoll (in Oslo) for 6 months now without bumping into him; how odd is that?
Anyways, the last few days I’ve written about Language and Semantics and about context for understanding communication (all with strong relations to programming languages), and needless to say this became the topic (heh) of discussion on the bus this morning as well.
In this post I’ll try to summarize the discussion so far, fold in the discussion from the bus this morning, and couple it with a discussion I’ve had with Reginald Braithwaite on his blog, starting from “My mixed feelings about Ruby“. Let’s start with Reginald and move backwards.
Matz has said that Ruby is an attempt to solve the problem of making programmers happy. So maybe we aren’t happy with some of the accidental complexity. But can we be happy overall? Can we find a way to program in harmony with Ruby rather than trying to Greenspun it into Lisp?
I think that the goal of making programmers happy is a good one, although I suspect there’s more than one way to please a programmer. One way is perhaps rooted in the syntax of the language at hand. Then there’s the semantics of your language keywords. Another is to have good APIs to work with. Another is how meta the language is (i.e. how much freedom the programmer has to change the semantics of the language; Lisp is very meta, while Java is not at all), and yet another is the community around it. Or the type and amount of documentation. Or its run-time environment. Or how the code is run (interpreted? compiled? half-compiled to bytecodes?).
Can we find ways of programming that would make all programmers happy? I need to point back to my first post about Language and Semantics and simply reiterate that there’s a tremendous lack of focus on why we program in most modern programming languages. Their idea is to shift bits around, and seldom to satisfy some overall, more abstract problem. So for me it becomes more important to convey semantics (i.e. meaning) through my programming than to merely have the ability to do so. Most languages will solve any problem you have, so what do the different languages offer us? In fact, how different are they most of the time?
At this moment in time I have extremely mixed feelings about Ruby. I sorely miss the elegance and purity of languages like Scheme and Smalltalk. But at the same time, I am trying to keep my mind open to some of the ways in which Ruby is a great programming language.
I think we really agree here. My own experience with over 8 years of professional XSLT development (yes, look it up 🙂) has taught me some valuable lessons about how elegant functional programming can be, just like Lisp and the mix-a-lot Smalltalk (the one I like less of the two). But then I like certain ways that Ruby does things too, with a better syntax for one. I like to bicker about syntax. Yeah, I’m one of those. And I think I bicker about syntax for very good reasons, too;
In “just enough to make some sense” I talk about context; how many hints do we need to provide in order to communicate well? Make no mistake; when we program, we are doing more than shifting bits and bytes back and forth. We are giving hints to 1) a computer that runs the code, and 2) a programmer (either the original developer, or someone else looking at her code). Most arguments about syntax seem to stem from 1), in which case 2) becomes a personal opinion of individuals rather than a communal exercise. In other words, syntax seems to come from some human designer trying to express hints best to the computer in order to shift bits about, instead of focusing entirely on their programming brothers and sisters.
As for the first quote, about Ruby being designed to please the programmer: that would imply that 2) was in focus, but the focus of that quoted statement is all wrong; it pleases some programmers, but certainly not all, otherwise why are we even talking about this stuff?
Ok, we’re ready to move on to the crux of the matter, I think.
I am arguing that while it is easy to agree that languages ought to facilitate writing readable programs, it is not easy to derive any tangible heuristics for language design from this extremely motherhood and apple pie sentiment.
Readability is an important and strong word. And it is very important, indeed. We need everything to be readable, from syntax to APIs to environments and onwards. I think we all want this pipe-dream, but we all see different ways of accomplishing it. Some say it’s impossible, others say it’s easy, while people like Reginald are, I think, right there in the middle, the ultimate pragmatic stance. And if I had never done Topic Maps I would be right there with him. Like Stian Danenbarger said this morning, there’s more to readability than just reading the code well.
Yeah, it’s time to talk about what happens when you drink the kool-aid and accept the paradigm shift that comes with it. There are mainly three things I’ve learned through Topic Maps;
- Everything is a model, from the business ideals and processes, to design and definition, our programming languages, our databases, the interaction against our systems, and the human aspect of business and customers. Models, models, everywhere …
- All we want to do is to work with models, and be able to change those models at will
- All programming is to satisfy recreating those models
Have you ever looked at model-driven architecture or domain-driven design? These are somewhat abstract principles to creating complex systems. Now, I’m not going to delve into the pros and cons of these approaches, but merely point out that they were “invented” out from a need that programming languages didn’t solve, namely the focus on models.
Think about it; in every aspect of our programming life, all we do is try to capture models which somehow mimic the real-life problem-space. The shifting of bits wouldn’t be necessary if there wasn’t a model we’re working towards. We create abstract models of programming that we use to translate between us humans and those pesky computers, which aren’t smart enough to understand “buy cheap, sell expensive” as a command. This is the main purpose of our jobs – to make models that translate human problems into computer-speak – and then we choose a programming language to do it in. In other words, the direction is not language first, then the problem, but the other way around. In my first post in this series I talked about tools, and about choosing the “right tool for the job.” This is a good moment to lament some of what I see as the real problems of modern programming languages.
Object-oriented programming. Now, don’t get me wrong, I think OOP is a huge improvement over the process-oriented imperative ways of the olden ways. But as I said in my last post, it looks so much like the truth, we mistakenly treat it as truth. The truth is there’s something fundamentally wrong with what we know as object-oriented programming.
First of all, it’s not labeled right. Stian Danenbarger mentioned that someone (can’t remember the name; Morten someone?) said it should be called “class-based programming” or – if you know the Linnean world – taxonomical programming. If you know about RDF and the Semantic Web, it too is loosely based on recursive key/value pairs, creating those tree-structures as the operative model. This is dangerously deceitful, as I’ve written about in my two previous posts. The world is not a tree-structure, but a mix of trees, graphs and vectors, with some semi-ordered chaos thrown in.
Every single programming approach, be it a language or a paradigm like OOP or functional, comes with its own meta model of how to translate between computers and the humans that use them. Every single approach is an attempt to recreate those models, to make it efficient and user-friendly to use and reuse those models, and make it easy to change the models, remove the models, make new ones, add others, mix them, and so on. My last post goes into much detail about what those meta models are, and those meta models define the communication from human to computer to human to computer to human, and on and on and on.
It’s a bit of a puzzle, then, why our programming languages focus less on the models and more on shifting those bits around. When shifting bits is the modus operandi and we leave the models in the hands of programmers who normally don’t think too much about those models (and, perhaps by inference, programmers who don’t think about those models go on to design programming languages in which they want to shift bits around …), you end up with some odd models, which most of the time are incompatible with each other. This is how all models get shifted up to the API level.
Everyone who has ever designed an API knows how hard it can be. Most of the time you start in one corner of your API thinking it’s going smoothly, until you meet the other end, and you hack and polish your API as best you can and release version 1.0. If anyone but you uses that API, how long until the requests for change, the bugs, the “wouldn’t it make more sense to …”, the “What do you mean by ‘construct objects’ here?”, and on and on and on? Creating APIs is a test of all the skills you’ve got. And all of the same can be said about creating a programming language.
Could the problem simply be that we’re using a taxonomic programming-language paradigm to try to create graph-structured applications? I like to think so. Why isn’t there native support in languages for typed objects, the most basic building block of categorisation and graphing?
$mice = all objects of type ‘mouse’ ;
set free $mice of type ‘lab’ ;
Or relationships (with implicit cardinality)?
with $mice of type ('woodland')
add relationship 'is food' to objects of type 'owl' ;
with $mice that has relationship to objects of type ('owl')
add type ('owl food') ;
Or workflow models?
in $workflow at option ('is milk fresh?') add possible response ('maybe')
with task ('smell it') and path back to parent ;
[disclaimer: these are all tongue-in-cheek examples]
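For fun, here is what the same tongue-in-cheek ideas might look like in present-day Python. Everything here is made up for illustration: there is no real `Graph` library with this API, it’s just a minimal in-memory sketch of typed objects and relationships as first-class notions.

```python
# A hypothetical sketch of "native" typed objects and relationships,
# modelled as a tiny in-memory graph. All names (Graph, add, relate,
# of_type, related) are invented for illustration only.

class Graph:
    def __init__(self):
        self.objects = []      # (name, set of types)
        self.relations = []    # (source, relation type, target)

    def add(self, name, *types):
        self.objects.append((name, set(types)))

    def of_type(self, t):
        return [name for name, types in self.objects if t in types]

    def relate(self, source, rel, target):
        self.relations.append((source, rel, target))

    def related(self, source, rel):
        return [t for s, r, t in self.relations if s == source and r == rel]

g = Graph()
g.add("mickey", "mouse", "lab")
g.add("jerry", "mouse", "woodland")
g.add("hedwig", "owl")

# "with $mice of type 'woodland' add relationship 'is food' to owls"
for mouse in g.of_type("woodland"):
    for owl in g.of_type("owl"):
        g.relate(mouse, "is food", owl)

print(g.of_type("mouse"))             # ['mickey', 'jerry']
print(g.related("jerry", "is food"))  # ['hedwig']
```

The point isn’t the implementation, of course; it’s that in most languages this kind of model lives in user-space libraries rather than in the language itself.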
No, most programming languages follow the tree structure quite faithfully, or, more precisely, the taxonomic model (which is mostly trees, but with the odd jump (relationship) sideways to deal with the kludges that didn’t fit the tree). Our programs are exactly that: data and code, and the programming languages define not only the syntax for how to deal with the data and code, but the very way we think about dealing with blobs of data and code.
They define the readability of our programs. So, Reginald closes:
Again we come down to this: readability is a property of programs, and the influence of a language on the readability of the programs is indirect. That does not mean the language doesn’t matter, but it does make me suspicious of the argument that we can look at one language and say it produces readable programs and look at another language and say it does not.
Agreed, except I think most of the languages we do discuss are all forged over the same OOP and functional anvil, in the same “shifting the bits and bytes back and forth” kind of thinking. I think we need to think in terms of the reason we program: those pesky models. Therein lies the key to readability, when the code resembles the models we are trying to recreate.
Syntax for shifting bits around
Yes, syntax is perhaps more important than we like to admit. Syntax defines the nitty-gritty way we shift those bits around in order to accomplish those modeling ideals. It’s all in the eyes of the beholder, of course, just as every programming language meta model has its own answer. What is the general consensus on good syntax, syntax that conveys the right amount of semantics for us all to agree on its meaning?
There are certain things which seem to be agreed on: using angle brackets and the equals sign for comparing basic types, for example, or using colon-equals to assign values (although there’s a 50/50 split on that one), using curly brackets to denote blocks (but not closures), using square brackets for arrays or lists (but not in functional languages), using parentheses for functional lists, certain keywords such as const for constants, var for variables (mostly in loosely typed languages, for some reason) or int or Int for integers (basic types or basic type classes), and so on. But does any of this really matter?
As far as shifting bytes around goes, I’d say they don’t matter. What matters is why we’re shifting the bytes around, and most languages don’t care about that. And so I don’t care about the syntax or the language quirks of inner closures when inner closures are a symptom of us using the wrong tools for the modeling job at hand. We’re bickering about how best to do it wrong instead of focusing on doing it right. Um, IMHO, of course, but that’s just the Topic Maps drugs talking.
Just like Robert Barta (who I wish would come to dinner more often), I too dream of a Topic Maps (or graph-based) programming language. Maybe it’s time to dream one up. 🙂
I’ve realized that my previous post on language and semantics could be a bit hard to understand without the proper context wrapped around it, so today I’ll continue my journey of explaining life, the universe and everything. Today I want to talk about “just enough complexity for understanding, but not more.”
Let’s talk about mouse. Or a mouse. Mice. Let’s talk about this:
One can argue whether this is really enough context for us to talk about this thing. What does “mouse” mean here? The Disney mouse? A computer mouse? The mouse shadow on the second moon? In order for me to communicate clearly with my fellow human beings I need to provide just enough information for us to figure this out, so I say “a mouse, you know, the furry, omnivorous, small critter that …”:
This is too much information, at least for most cases. I’m not trying to give you all the information I know about mice, just enough for me to say “I saw a mouse yesterday in the pantry.” Talking about context is incredibly hard because, frankly, what does context mean? And how much background information do I need to provide in order for you to understand what I’m talking about?
In terms of language, “context” means both verbal context, the words and expressions that surround a word, and social context, the connection between the words and those who hear or read them, shaped by human constraints (age, gender, knowledge, etc.). There’s some controversy about this, and we often also imply certain mental models (the social context of understanding).
In general, though, we talk about context as “that stuff that surrounds the issue“: solid objects, ideas, my mental state, what I see, what I know, what my audience sees, knows, hears and smells, cultural and political history, musical tastes, and on and on and on. Everything in the moment and everything in the past, in order to understand the current communication that takes us to the future.
Yup, it’s pretty big and heady stuff, and it’s a darn interesting question: how much context do you need in order to communicate well? My previous post was indeed about how much context we need to put into our language and definitions in order to communicate well.
A bit of background
Back in 1956, a paper by the cognitive psychologist George A. Miller changed a lot of how we think about our own capacity for juggling stuff in our heads. It’s a famous paper, and further research since has refined and confirmed its basic premise: there’s only so much we’re able to remember at the same time. And the figure that came up was 7, plus or minus 2.
Of course, that number is specific to that research, and may mean very little in more specific settings. It’s a general rule, though, that hints at the limits we have in cognition, in the way we observe and respond to communication. And it certainly helps us understand the way we deal with context. Context can be overly complex, or overly simple. Maybe the right amount of context is 7, plus or minus 2?
I’m not going to speculate much on what it means that “between 5 and 9 equally-weighted error-less choices” defines arbitrary constraints on our mental storage capacity (short-term especially), but I’ll happily speculate that it guides the way we understand context, perhaps especially where context is loosely defined.
We humans have a tendency to think that things that look like the truth must be the truth. We do this perhaps especially in the way we deal with computer systems because, frankly, it’s easy to define structures and limitations there. It’s what we do.
An example of this is how we see everything as containers that may contain things, which in themselves might be things or more containers, and so on. Our world is filled with this notion, from taxonomies, to object-oriented programming, to XML, to how we talk about structures and things, to how science was defined, and on and on and on. Tree structures, basically.
But anyone with a decent taxonomic background knows that taxonomies don’t always work as a strict tree structure. So does anyone who’s meddled in OO for too long, or fiddled with XML until the angle brackets break. These things look so much like the truth that we pursue them as truth.
Things are more chaotic than we’d like. They’re more like graph structures, in fact, where relationships between things go back and forth, up and down, over and under already established relationships. It can be quite tricky, because the simple “this container contains these containers” mentality is gone, and a more complex model appears:
This is the world of the Semantic Web and Topic Maps, of course, and many of the reasons why these emerging technologies are, er, emerging is of course that not all containers are containers at all, and that the semantics of “this thing belongs to that thing” isn’t precise enough when we want to communicate well. Explaining the world in terms of tree structures puts too many constraints on us, so many that we spend most of our time trying to fit our communication into them rather than simply communicating.
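A minimal illustration of where the container mentality breaks down, using nothing but plain Python dictionaries (the container names here are made up):

```python
# Tree containment assumes every thing has exactly one parent container.
tree = {
    "animals": ["mice", "owls"],
    "pets": ["mice"],   # oops: "mice" now has two parents,
}                       # so this is no longer a tree but a graph

# Count parents per child; in a strict tree each child would have one.
parents = {}
for container, children in tree.items():
    for child in children:
        parents.setdefault(child, []).append(container)

print(parents["mice"])  # ['animals', 'pets'] - a graph-shaped fact
```

The moment one thing legitimately belongs in two places, the tree model stops describing the world and starts constraining it.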
We could go back to frames theory as well, with the recursive key/value properties you find naturally in b-trees, where each value is either a literal or another property. RDF is based on this model, for example, where the recursion is used to create graph structures. (Which is one reason I hate RDF: using anonymous nodes for literals.)
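The recursive key/value idea is easy to sketch with plain dictionaries: each value is either a literal or another frame, which is exactly what lets frames form graphs rather than trees. (The frames themselves are invented for the example.)

```python
# A frame is a set of key/value properties where each value is either
# a literal or another frame - the recursion that makes graphs possible.
mouse = {"name": "mouse", "legs": 4}
owl = {"name": "owl", "eats": mouse}       # value is another frame
mouse["eaten by"] = owl                    # and back again: a cycle

print(owl["eats"]["name"])                 # 'mouse'
print(mouse["eaten by"]["eats"] is mouse)  # True - a graph, not a tree
```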
Programming languages and meta models
Programming languages don’t extend the basic pre-defined model of the language much. Some languages allow some degree of flexibility (such as Ruby, Lisp and Python), some offer tweaking (such as PHP, Lua and Perl), others offer macros and overloading of syntax (mostly the C family), and yet more are just stuck in their modeling ways (Java). [note: don’t take these notions too strictly; there’s a host of features to these languages that mix and match various terms, both within and outside of the OO paradigm]
What they all have in common is that the defined meta model is linked to shifting bits and bytes around a computer program, and that all human communication and / or understanding is left in the hands of programmers. Let’s talk about meta models.
Most programming languages have a set of keywords and syntax that make up a model of programming. This is the meta model; it’s the foundation of a language, the set of things on which you build your programs. All programming languages have more or less of them, and the more they have, the stricter they usually are as well. Some are object-oriented languages, others functional, some imperative, and yet others mix things up. If I write:
Integer i = new Integer(34);
in Java, there are only so many ways to interpret that. It’s basically an instance of the Integer class, holding the integer number 34. But what about
$i = new Int(34);
in PHP? There is no built-in class called Int in PHP, so this code either fails or produces an instance of some user-defined class called Int, but we don’t know what that means, at least not at this point. And this is what the meta model defines: built-in types, classes, APIs and the overall framework; how things are glued together.
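A rough Python analogue of the PHP situation: the same expression fails outright until the surrounding meta model supplies a definition, and once it does, the expression means whatever that definition says, not what the syntax suggests. The `Int` class here is entirely made up.

```python
# Without a definition in scope, the expression fails - Python's meta
# model has no built-in named Int.
try:
    i = Int(34)
except NameError as e:
    print("undefined:", e)

# Once *some* class named Int exists, the expression runs, but what it
# means depends entirely on that definition, not on the syntax.
class Int:
    def __init__(self, value):
        self.value = value * 2  # arbitrary: this "Int" doubles its input

i = Int(34)
print(i.value)  # 68 - same expression, meaning supplied by the model
```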
As such, Java and .Net have huge meta models defined, so huge that you can spend your whole career in just one part of them. PHP has a medium-sized meta model, Perl an even smaller one, all the way down to assembler with a rather puny meta model. Syntax and keywords are not just how we program; they define the constraints of our language. There are things that are easy and hard in every language, and there is no one answer to what the best programming language is. They all do things differently.
The object-oriented ways of Java differ from those of Ruby, which differ from those of C++, which differ from those of PHP. The functional ways of Erlang differ from those of XSLT, which differ from those of Lisp.
The right answer?
There is no right answer. One can always argue about the little differences between all these meta models, and we do, all the time. We bicker about operator overloading, about whether multiple inheritance is better than single inheritance, about the real difference between interfaces and abstract classes, about getter and setter methods (or the lack thereof), about whether types should be first-class objects or not, about what closures are, about whether to use curly brackets or define program structure through whitespace, and on and on and on.
My previous post was another way of saying that perhaps we should argue less about the meta model of our language and worry more about the reason the program was created than about how a certain problem was solved. We don’t have the mental capacity to juggle too much stuff around in our brains, and if the meta model is huge, our ability to focus on the important bits becomes less.
There are so many levels of communication in our development stack. Maybe we should introduce a more semantically sane model into it, to move a few steps closer to the real problem: the communication between man and machine. I’m not convinced that either OO or functional programming solves the human communication problem. Let’s speculate and draw sketches on napkins.