Open Public Consultation: Review of the European Standardisation System

Citizens, businesses, public bodies, etc. can respond to the consultation document (a sensibly-short 6 pages). The consultation document and information about responding are all online at http://ec.europa.eu/enterprise/policies/european-standards/public-consultation/index_en.htm .

The deadline is 21st May 2010.

Key Challenges in the Design of Learning Technology Standards – Observations and Proposals

Here is a paper I’ve just written, “Key Challenges in the Design of Learning Technology Standards – Observations and Proposals” (PDF 260kB),  as part of an exploration of the many-sided question: “how should we make learning technology standards?”

Quoting the abstract:
This paper considers some key challenges that learning technology standards must take account of: the inherent connected-ness of the information and complexity as a cause of emergent behavior. Some of the limitations of historical approaches to information systems and standards development are briefly considered alongside generic strategies to tackle complexity and system adaptivity. A consideration of the facets of interoperability – organizational, syntactic and semantic – leads to an outline of a strategy for dealing with environmental complexity in the learning technology standards domain.

I should add that this is not meant to be the last word on the subject but a contribution to an ongoing conversation. Please comment.

This paper appears in the “International Journal of IT Standards and Standardisation”  edited by Jan Pawlowski, Tore Hoel and Paul Hollins. Copyright 2010, IGI Global, www.igi-global.com. Posted by permission of the publisher.

Open Source, Open Standards and ReUse: Government Action Plan?

Yes, actually there is a document called “Open Source, Open Standards and ReUse: Government Action Plan“. This is the latest (Jan 27th 2010) statement from central government on the topic; previously an Open Source policy was hatched in 2004.

Really, the document should be called “Open Source in ICT Procurement: Government Action Plan” as Open Standards get relatively little mention. Indeed, it would have been a clearer communication if it had stuck to this scope. Having said this, there is evidence of a clear and purposeful approach. Here are a few snippets that I thought worthy of mention…

“The Government will expect those putting forward IT solutions to develop where necessary a suitable mix of open source and proprietary products to ensure that the best possible overall solution can be considered. Vendors will be required to provide evidence of this during a procurement exercise. Where no evidence exists in a bid that full consideration has been given to open source products, the bid will be considered non compliant and is likely to be removed from the tender process.”

“The agreement to the Cross Government Enterprise Architecture framework and its acceptance by the Government’s major IT suppliers has enabled the disaggregation of ‘closed’ business solutions into component requirements. This allows sharing and reusing of common components between different lines of business.”

“We have clarified that we expect all software licences to be purchased on the basis of reuse across the public sector, regardless of the service environment it is operating within. This means that when we launch the Government Cloud, there will be no additional cost to the public sector of transferring licences into the Cloud.”

These, and much else in the document, show a clear focus on saving public money in the medium to long term. Great! The actions seem realistic from the point of view of implementation by public administrators. It will take some time but they seem to be pointing in the right direction and committed to fair comparison of OSS vs proprietary software.

There are also a number of references to “Open Source techniques and culture”. These deserve a “D: good effort” to my mind and are rather more challenging for government, civil servants, etc. From my experience, Open Source culture and public administration culture (especially in central government) are not particularly close. That’s just the way it is and I’m glad that culture change isn’t the priority in this document. To be fair, they are trying and making some progress, but I’m not expecting open email reflectors – e.g. as used by the Apache Foundation – to be anything but highly unusual, and little things give it away, such as the absence of any licence or IP assertion on the document, let alone a Creative Commons or GNU Copyleft licence.

In spite of the above qualifications: ‘good effort HMG CIO Council, keep at it!’ And in the medium term, there are some clear opportunities for open-minded suppliers who understand how to work with OSS in their portfolio.

There is also the Government ICT Strategy, which is the umbrella for the document I am referring to above. This includes lovely names such as the “G-Cloud” (government cloud) and “G-AS” (applications store) but I’ve not digested the content yet…

The Paradox of the Derivative Work

At last week’s  Future of Interoperability Standards in Education meeting, one of the issues that came up in the discussion group that I was in was that the creation of “derivative works” was a serious unresolved issue. I summarised this in the plenary feedback as “The ability to create derivative works is an ESSENTIAL issue. There are cases when divergence is damaging but also when [necessary] derivation is prevented. How to resolve this paradox?” This is rather cryptic as it stands so I will expand.

The paradox is that derivation from one standard (I am using “standard” loosely to include pretty much any documented set of technical conventions) to create another is both desirable and undesirable. It is desirable because communities and applications differ, because standards mature, etc., and one size will not fit all. It is undesirable because benefits are realised when more people do something in the same way, not to mention the confusion arising from proliferation. It seems that there is a Network Effect with standards. I like describing this as a “paradox” as it conveys the idea that we might not be looking at the problem in the right way. An alternative description might be that there are “conflicting issues” in educational technology standardisation (see Dan Rehak’s position paper).
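To make the Network Effect point concrete, here is a toy calculation (my own illustration, not taken from any of the papers referred to here): if the value of a standard scales roughly with the number of pairs of adopters who can interoperate, then splitting one community across two incompatible derivatives roughly halves that value.

```python
# A toy illustration of the Network Effect in standards adoption: pairwise
# interoperability links grow roughly as n*(n-1)/2, so dividing one community
# of adopters between two incompatible derivative standards sharply reduces
# the number of systems that can talk to each other.

def pairwise_links(n: int) -> int:
    """Number of interoperable pairs among n adopters of a single standard."""
    return n * (n - 1) // 2

single_standard = pairwise_links(100)                       # 4950 links
two_derivatives = pairwise_links(60) + pairwise_links(40)   # 1770 + 780 = 2550
print(single_standard, two_derivatives)
```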

Having discussed this issue with a couple of people since the meeting and reflected a little, I would like to explore how we might start to resolve the paradox (I do not aspire to actually resolve the matter into self-evident statements). My thinking has similarities to the capabilities and maturity model in Dan’s paper in trying to separate out tangled concepts.

I believe there are three strands to tease out:

  1. “Derivation” covers a multitude of different kinds of use. The term “derivative work” has an overlay of meaning from mainstream writing and publishing that is probably not appropriate for many of these “kinds of use”.
  2. There is a spectrum of intellectual contribution to a standard from the development of conceptual models to the creation of the published document.
  3. “Standard”  covers a multitude of different kinds of artifact.  Attempts to apply labels such as “formal”, “informal” or “specification” usually lead to fruitless argument.

Kinds of Derivation

I am referring here to derivation of a published document (and again using a loose meaning for “standard”). Looking at the different kinds of derivation, with labels-of-convenience that are not intended to follow any conventional definitions, I suggest that some of the kinds of derivation that are relevant to standardisation are:

Ratify (cf. “ratify a treaty”)

The standard is taken as-is from its source. Although it may be re-published or referred to by a new identifier or name it is not revised. This form of derivation might be used to create a national standard that mirrors an international one. There would normally be a standing agreement that ratification can or should occur. It is immaterial from a technical point of view which one is used.

Adopt (cf. “adopt a child”)

The standard is taken on by a new organisation or ad-hoc group and the existing organisation/group relinquishes its ownership. “Ownership” implies full control over the future development, publication, transfer of rights etc. So long as the transfer is properly communicated, adoption should not necessarily lead to negative effects.

Spin-off

A snapshot of the standard is taken by a new organisation and reworked according to its documentation conventions. This is a kind of “re-work” (see below). The new work is compatible at a technical level (syntactic and semantic). The new organisation manages the creation and (editorial) maintenance within the bounds of technical compatibility while the originating organisation can continue to exert full control over its version. It is immaterial from a technical point of view which one is used at the point of departure but the originating organisation must accept more constraints on future plans as they cannot deprecate the spin-off (which will have its separate implementers).

Profile

A new work is created that includes elements of a published standard by reference. The new work may include extensions, value lists (aka vocabularies) and additional constraints. Profiling is only possible when the published standard is both persistently available (as a specific version) and structured in a way to allow for the necessary references to be made. This is not a re-work; it is more like original work with citations. While we may wish to avoid needless proliferation of profiles in the interests of realising a Network Effect, profiles are significantly less damaging than re-works as they make clear the points of reuse and divergence.
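As a purely illustrative sketch of what “original work with citations” might look like in machine-readable form, the fragment below models a profile as a set of references into a versioned base standard plus explicitly-marked constraints and extensions. All names and identifiers are hypothetical and are not taken from any real profile or specification.

```python
# A minimal, hypothetical sketch of a profile: every element is either
# referenced from the base standard (by persistent, versioned identifier) or
# declared as a local constraint/extension, so the points of reuse and
# divergence stay explicit.  Names below are illustrative only.

profile = {
    "name": "Example Community Profile",                     # hypothetical
    "base_standard": "urn:example:spec:learner-info:v1.2",   # must be a specific, persistent version
    "elements": [
        {"ref": "#person/name"},                             # reused as-is, by reference
        {"ref": "#person/role",
         "constraint": {"vocabulary": ["Learner", "Instructor", "Mentor"]}},  # narrowed value list
    ],
    "extensions": [
        {"name": "campusCode", "datatype": "string"},        # local addition, clearly marked
    ],
}
```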

Re-work

A new work is created that takes an original and makes changes: additions, modifications and deletions. When both the original and the re-work are in circulation, confusion is created and the effectiveness of both the new and the original work is harmed. This is what I would expect would conventionally be referred to as a “derivative work”.

Spectrum of Intellectual Contribution

I am not an expert in intellectual property law and may have committed faux pas: use the comment facility.

The concept of “derivation” as expanded above does not apply equally to all of the stages of activity that underpin the publication of a standard. Here, I try to stereotype four kinds of contribution, of which “derivation” is only really relevant to the last two. The practical difficulty is that these kinds of contribution are often mixed together in the process. Maybe we should look at separating them into pairs and applying different processes. The stereotypes are:

Development of Conceptual Model

I recognise that the following is rather shallow from a philosophical point of view and that I am adopting something of a social constructivist point of view.

Conceptual models are shared abstractions of the world. At some point in time a conceptual model must be documented in the standards process but the conceptual model is a social knowledge-construct independent of its representation/documentation. Hence conceptual models are not subject to ownership or intellectual property assertions. If it is just my idea it is not a shared abstraction: not a conceptual model. The development of a conceptual model requires broad participation and discourse to be accurate and hence useful. Evolution of a conceptual model that is documented in a published work should not be considered “derivation” of that work.

Development of Technical Approach

This would include the creation of information models, decisions on patterns and components to profile, technical trialling, etc. This is the solution to a problem independent of its description. It is the knowing-how-to: techne. This kind of contribution is the realm of patent law. Contributors should expect to contribute under RAND or royalty-free terms but not to transfer all rights, or they may choose to make public non-assertion covenants. A contributor is free to re-use their contribution (NB not the standard incorporating it) but not necessarily the contributions of others. This re-use is not “derivation” (as above).

Contribution of Prior Work

This category of contribution may be broken down along the same lines as “Kinds of Derivation”.

Creation of Published Document

The creation of content, review and editing of “the standard” as a published work is clearly the most concrete part of the process. Without the precisely documented expression, the underpinning conceptual model and technical approach are not directly useful as a standard. It is at this end of the spectrum that contributors should expect to grant ownership of their contribution to another legal entity. We are in the realm of copyright law and “derivation”.

Formal/Informal or Standard/Specification

I have a hunch that applying any of these labels or trying to define them is liable to cause or contribute to more confusion or argument than it is worth.

Can Grassroots Action “Save” the Education Technology Standards World from Itself?

In the approximately ten years that most of the well-known Ed Tech standards bodies have been in existence, it has been hard work to make even a little progress. Why is this? I believe one factor is that there was a premature rush to create high-status specifications and formal standards. There is, however, some light at the end of the tunnel as there is growing evidence (anecdote maybe?) that more grass-roots models may be effective.

I have written a short document to explore this and possible synergies between formal and informal approaches (MS Word) as a  position paper for a meeting on Jan 12th 2010. Other position papers may be found on the meeting page.

Meritocracy in Open Standards: Vision or Mirage

Few would argue for privilege over merit in general terms and the idea of “meritocracy” is close to the heart of many in the Open Source Software (OSS) community. How far can the ideal of meritocracy be realised? Are attempts to implement meritocratic principles in the development of open standards (using “standards” to include virtually any documented set of technical conventions) visionary or beset by mirages?

What follows is a first pass at answering that rather rhetorical question. I have avoided links as I’m not trying to point fingers (and you would be wrong in thinking there are any between-the-lines references to organisations or individuals).

A meritocracy requires both a dimension of governance and a dimension of value. The latter, “value”, incorporates both the idea that something should be measurable and that there is consensus over the desirable measure and its association with positive outcomes of the endeavour. In the absence of a measurable quantity that can be applied in a bureaucratic way, we have a hegemony or a club. The Bullingdon Club is not a meritocracy. I suggest the following questions should be asked when considering implementing a meritocracy:

  1. Have we recognised that a meritocracy must be situated in a context? There must be some endeavour that the system of merit is supporting and the suitability of a meritocratic system can only be judged in that context. There is no universal method.
  2. Do we understand what success looks like for the endeavour? What are the positive outcomes?
  3. Is there a human behaviour or achievement that can be associated with realising the positive outcomes?
  4. Are there measures that can be associated with these behaviours or achievements?
  5. Can these behaviours or achievements be dispassionately evaluated using the measures?

Clear and coherent answers can be provided to these questions for OSS endeavours focussed on fixing bugs, improving robustness, improving performance etc. The answers become rather more vague or contentious if we start to include decisions on feature-sets, architecture or user interface design. Many successful OSS efforts rely on a different approach, for example the benevolent dictator, alongside some form of meritocracy.

So: what of “meritocracy in open standards”? Posing the five questions (above), I suggest:

  1. The context is open standards development. There are differing interpretations of “open”, generally revolving around whether it is only the products that are available for use without impediment or whether participation is also “open”. It only makes sense to consider a meritocracy in the latter case, so we seem to have a recognisable context. NB: the argument as to whether open process is desirable is a different one to how you might govern such a process and is not addressed here.
  2. Success of the open standards endeavour is shown by sustained adoption and use. Some people may be motivated to participate in the process by ideas of public good, commercial strategy, etc., but realising these benefits is a success factor for their participation, not for the endeavour per se. I would like to place questions of morality alongside these concerns and outside consideration of the instrument: open standards development.
  3. This is where we start running in sand inside an hourglass. Anecdotes are many but simple relationships hard to find. Some thoughtfully constructed research could help but it seems likely that there are too many interacting agents and too many exogenous factors, e.g. global finance, to condense out “simple rules”. At this point we realise that the context should be scoped more clearly: not all areas of application of open standards have the same dynamics, for example: wireless networking and information systems for education.  Previous success as a contributor to open standards may be a reasonable indicator but I think we need to look more to demonstration of steers-man skills. The steers-man (or woman) of a sail-driven vessel must consider many factors – currents, wind, wave, draught, sea-floor, etc – when steering the vessel. Similarly, in open standards development we also have many factors influencing the outcome in our complex system: technical trends, supplier attitudes (diverse), attitudes of educational institutions, government policy change, trends in end-user behaviour…
  4. Not really. We could look to measures of approval by actors in the “complex system” but that is not a meritocratic approach although it might be a viable alternative.
  5. Not at all. Having stumbled at hurdle 4 we fall.

It looks like meritocracy is more mirage than vision and that we should probably avoid making claims about a brave new world of meritocratic open standards development. Some anti-patterns: “anyone can play” is not a meritocracy; “it depends on who you know” is not a meritocracy. The latter, cronyism, is a dangerous conceit.

There are many useful methods of governance that are not meritocratic; i.e. methods that would satisfy an “act utilitarian”. I suggest we put merit to one side for now or look for a substantially more limited context.

Progress on IMS Learning Information Services (formerly Enterprise v2)

Here are some slightly-edited notes I provided to colleagues following my attendance at the October 2009 IMS Quarterly Learning Information Services (LIS) project team meeting. LIS is the next generation of what has previously been called “IMS Enterprise” and brings together the capabilities of batch processing (original IMS Enterprise) and the more “chatty” capabilities of IMS Enterprise Services alongside other refinements and additions.

The meeting was mostly oriented around:

  1. Demonstration by Oracle and a presentation by Sungard about their implementation of LIS
  2. Discussion on (minor) modifications to support requirements from IMS Learning Tools Interoperability (LTI)
  3. Mapping out next steps

The headline news is that a public draft release is expected in December this year. The core specification is judged to be fit for purpose and further work, after public draft, will focus on the definition of the “HE profile” and the conformance requirements.

Conformance specification and testing is acknowledged to be a difficult problem but there is interest in using BPEL to create what are effectively unit test scripts. Oracle seems to have taken this approach and there is some literature relating to it. It is my conjecture that a library of unit tests (in BPEL) managed by the to-be-instantiated “Accredited Profile Management Group” for LIS in IMS would be a practical approach to testing implementations of LIS.
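To illustrate the “library of unit tests” idea, here is a minimal sketch in Python rather than BPEL, so it is only an analogy to the approach mentioned above: each conformance check is a scripted exchange run against a candidate implementation’s endpoint. The endpoint, operation and response fragment below are placeholders, not actual LIS bindings.

```python
# Sketch of one conformance "unit test": send a canned request to a candidate
# implementation and assert on the reply.  A profile management group could
# curate many such scripts and publish the set as the conformance suite.
# Endpoint, operation name and expected content are hypothetical placeholders.

import requests  # assumes the service exposes a plain HTTP/SOAP endpoint


def check_read_person(endpoint: str, soap_request: str) -> bool:
    """Run a single scripted request/response check against an implementation."""
    response = requests.post(
        endpoint,
        data=soap_request,
        headers={"Content-Type": "text/xml"},
    )
    # The assertions would mirror the conformance requirements for the operation.
    return response.status_code == 200 and "<personRecord" in response.text
```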

The demonstrations:

Linda Feng (Oracle) showed their Campus Solutions “Student Administration Integration Pack” working with Sakai (Unicon version), Inigral (the Schools on Facebook app) and Beehive (Oracle collaboration software). Linda has recorded a ViewLet (best viewed full-screen). The Sakai integration used “normal” LIS SOAP web services but the other two used an ESB (specifically the Oracle Service Bus). The Beehive case is worthy of note as the integration was achieved, as I understand it, without any code modifications to Beehive: LDAP was used for core person data (an LDAP binding for LIS has been developed) and the existing REST API for Beehive was translated to and from LIS SOAP via the ESB. The Inigral integration is also REST based. It was reported that the Beehive integration took a couple of person-weeks to achieve and I can see quite a few people following the ESB route.
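For readers unfamiliar with the ESB pattern mentioned above, here is a hedged sketch of the general mediation idea: an incoming SOAP message is transformed and forwarded to an existing REST API so that the target application needs no code changes. All element names, paths and payload fields are hypothetical placeholders and do not reproduce the real LIS or Beehive interfaces.

```python
# A generic sketch of ESB-style mediation: extract fields from an incoming
# SOAP body and forward them to an existing REST endpoint.  All names, paths
# and fields are hypothetical placeholders, illustrating the pattern only.

import xml.etree.ElementTree as ET

import requests


def mediate_create_membership(soap_body: str, rest_base_url: str) -> int:
    """Translate a (hypothetical) SOAP membership message into a REST call."""
    root = ET.fromstring(soap_body)
    payload = {
        "groupId": root.findtext(".//groupId"),    # placeholder element names
        "personId": root.findtext(".//personId"),
        "role": root.findtext(".//role"),
    }
    resp = requests.post(f"{rest_base_url}/memberships", json=payload)
    return resp.status_code
```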

Sungard had only just completed a code sprint and were not in a position to demo. They expect to have both batch and chatty versions completed in Q1 2010. They did comment that many customers were already using “old” Enterprise batch processing but fully intend to move to LIS (leapfrogging Enterprise Services v1.0).

I gather Moodle Rooms is also close to completing a LIS implementation, although this is probably currently implemented against an old draft of LIS (the press release is cagey and doesn’t mention LIS).

At the terminal “summit day” of the quarterly, Michael Feldstein did a showman-like intro to LIS which was videoed (I’ll link it in when posted) and he has subsequently blogged about supplier inclinations towards LIS.

The Problem with “Evaluating Standards”

I’ve just uploaded an attempt, “Evaluating Standards – A Discussion of Perspectives, Issues and Evaluation Dimensions” (MS Word), to say in more than a few words why “Evaluating Standards” is easier to say than to do. For most of the issues there are no easy answers but I have tried to make some suggestions for a heuristic approach inspired by the Nielsen and Molich approach to usability. I’d like to acknowledge Scott Wilson for contributing his insight into what makes a good standard.

Joining Dots at the IMS September 2008: Learning Design

Last week (15-18 Sept) was the IMS Quarterly meeting, hosted by JISC, in Birmingham (UK). It was a rather unusual meeting as all of the sessions were open to non-members. As usual, however, it concluded with a “summit” day where various interesting people shared their ideas. I’m sure everyone joined a different set of dots (for readers not familiar with the culture I belong to, “joining the dots” is concisely explained on Wikipedia). For me, the shape of the animal that is the role of IMS Learning Design (LD) became clearer. Actually, I think it might be a family.

During the week there were several demonstrations of current generation LD tools and it is certainly true that these are an order of magnitude more usable than the first round of tools. These days you don’t have to know or understand the IMS specification, either its conceptual model or the technical details, to use the tools. Gilbert Paquette showed us TELOS, which is impressive and takes a graphical approach to visualising the workflows. Dai Griffiths and Paul Sharples showed two products of the TENCompetence project: ReCourse, a mixed graphical and tabular LD authoring tool that hides complexity, and Wookie, a widget-based approach to providing the “services” (forum, chat, voting, etc). Fabrizio Giorgini showed work from the PROLIX project, oriented towards work-place staff development, where Giunti have extended their eXact Packager to include a graphical LD editor.

In spite of the substantial progress demonstrated by the above software, we will still hear even tech-savvy academics exclaim “impressive but I can’t see how I’d ever use it” (not a real quote) or “I keep feeling that IMS LD was a solution looking for a problem and haven’t yet seen anything that solves any problems I have in learning & teaching” (David Davies). I don’t think we can address this by talking about LD. Rather, we need to talk about talking-about LD.

[Diagram: Dai Griffiths, “Multiple Uses of LD”]

Dai Griffiths made some observations about LD that, if you will indulge me in continuing my dot-joining metaphor, I think pointed out which way up the paper is. He said: “The history and multiple uses of the specification mean that it is a complex artefact with many perspectives on it.” He produced a diagram (following) to expand on this point.

For me, the diagram did more than expand on the point: it gave me an indication of a profitable way of reducing the complexity of our discussions by being clear that there is more than one way to perceive LD. Unless we can move discourse onto a more differentiated set of conversations, I believe we will not be able to really get anywhere with LD or, indeed, make much progress in dealing with the challenges the creators and proponents of LD believe it can address.

The situation of LD is not unique and there has been some interesting work exploring the concept of “enactment” in relation to Ecological Modelling Language conducted under the Comparative Interoperability Project. In this work Miller and Bowker say: “Jane Fountain invites us to distinguish between an ‘objective technology’ – that is to say, a set of technical, material and computing components such as the Internet – and an ‘enacted technology’ – that is to say, the technology on the ground as it is perceived, conceived and used in practice, in a particular context.”

So, what are we to do? Where should we start? I speculate that we should begin by clarifying what LD is. Bill Olivier had opened the “summit” with some reflections on the work of IMS and JISC in support of interoperability and thoughts on where we as a community could profitably work in the future. He described the work of IMS on data models as being more akin to domain modelling and I think this may be a good insight. Domain models are necessarily rather more abstract and application-agnostic than most people care to deal with. I think they are fundamentally models of “objective” rather than “enacted” technology. I believe we should accept and embrace this and conclude that LD is a language for technologists to coordinate the creation of artefacts that are the subjects of the differentiated discourse I referred to earlier.

There are two parallels with the eFramework to be made here, but it would be a diversion to go into detail. It too has a “history and multiple uses” and consequently there are many perspectives. The second parallel is that one of the purposes of the eFramework initiative is to enable dialogue within and across domains through the emergence of an explicit vocabulary appropriate to a service-oriented approach.

As a candidate for one differentiated conversation, I suggest picking up on another pearl from Dai Griffiths: you can consider LD to be about provisioning a learning environment. “Provisioning” is a bit of a jargon term for setting-up-what-you-need. If you start a new job, you expect a number of facilities to be provided: desk, computer, security card, payroll, user id, staff handbook… The equivalent provisioning of a learning environment entails the marshalling of resources, conversation (forum/chat) and other “tools”, assignment to groups, etc. It is online classroom management of a sort. Let us now have a conversation about this “thing” that you can drop into a virtual learning environment and that magically does all of the provisioning for, say, a two-way online debate. It’s a “wizard”: just add a few Word docs, choose how groups are assigned and it’s done. This isn’t a new use case; I discussed something very similar with Bill Olivier 5 years ago. I do think, though, that this is closer to the language of enactment.
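To make the “wizard” idea a little more concrete, here is a purely hypothetical sketch of the kind of declarative descriptor such a tool might produce and a VLE might consume. None of the keys or behaviour below come from IMS LD; they are illustrative only.

```python
# A hypothetical sketch of the provisioning "wizard": a declarative descriptor
# for a two-way online debate, expanded into the set-up actions a VLE would
# perform.  All names and values are illustrative, not drawn from IMS LD.

debate_setup = {
    "activity": "two-way-debate",
    "resources": ["briefing.doc", "rules.doc"],     # "just add a few Word docs"
    "group_assignment": "random-halves",            # "choose how groups are assigned"
    "services": [
        {"type": "forum", "one_per_group": True},
        {"type": "chat", "shared": True},
        {"type": "vote", "opens": "after-debate"},
    ],
}


def provision(setup: dict) -> list:
    """Expand the descriptor into the list of set-up actions a VLE would perform."""
    actions = [f"create groups: {setup['group_assignment']}"]
    actions += [f"attach resource: {r}" for r in setup["resources"]]
    actions += [f"add service: {s['type']}" for s in setup["services"]]
    return actions


for action in provision(debate_setup):
    print(action)
```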

Is this an application profile, i.e. a definition of which data elements to use, which vocabularies and which extensions? Not exactly: an application profile may emerge as a necessity but it would be prudent to be clear what the application is first, and that entails discourse in the language of enactment, not the domain model. As a closing aside, I would like to stress that I see an “application profile” as being a quite opinionated work; it should be for a purpose.

I am conscious that this is a somewhat under-developed argument, probably with numerous errors and certainly with leaps of faith, but I think it is time to expose my thinking out loud for criticism and to leave this piece definitely un-concluded…

The presentations referred to above are available on the web, linked from the agenda.