Weak Signals and Text Mining I – An Introduction to Weak Signals

“Weak Signals” is a rather fashionable term in parts of the future-watching community, although it is ill-defined, as evidenced by the lack of a specific entry in Wikipedia (there is only a reference under Futurology). There is an air of mystique and magic about Weak Signals Analysis that turns some people off, me included, but I have come to the conclusion that a sober interpretation of the idea can be provided. This is what we are trying to do in a work package led by the Zentrum für Soziale Innovation (strapline: “all innovations are socially relevant”) in the TELMap project. The work combines two approaches: one involving direct engagement with people, our “human sources” track, and one looking at “recorded sources”, i.e. existing written texts. My area of interest, and that of colleagues at RWTH Aachen University, is in the recorded sources. This post provides an introduction to the work and, I hope, a sober interpretation of “weak signals”; a following post will outline some initial ideas about how text mining might be used.

A Weak Signal is essentially a sign of change that is not generally appreciated; in some cases the majority of experts, or people generally, would dismiss it as irrelevant or simply fail to notice it. In these days of social software and ubiquitous near-instantaneous global communication, the focus is generally on trends, memes, etc. Thought leaders of various kinds – individuals and organisations – wield huge power over the focus of attention of a following majority. The act of anticipating what the next trend/meme will be could be construed as looking for a weak signal. There are a number of problems with identifying weak signals, and a naive approach is bound to fail; for example, asking people to “tell me some weak signals” is equivalent to asking them to describe something they think is irrelevant but which might be important. Neither can you ask the experts, by definition. The point here is that the person who spots a sign of change may well be an outsider, on the periphery, or in a despised sub-culture.

In spite of Weak Signals being a problem concept, the fact remains that to anticipate change would give an innovator an advantage and potentially help an agent in the mainstream to avoid being blind-sided. To make even a small contribution here is part of the mission of both TELMap and CETIS. Our intention is to divert some attention away from the hot topics of the day and to discover some neglected perceptions or ideas that are worthy of more attention, both social attention and analytical investigation. This intention, and an assertion that we only ever consider Possible Weak Signals, is my “sober interpretation”. There is no magic here, no shamanic trance leading to revelation.

There is ample literature around the topic of Weak Signals but I will mention only a couple of sources. Elina Hiltunen is a well-known figure; see for example some slides and references (pdf), in which she gives an informal checklist for weak signals (quoted with minor changes to the English) that should be viewed as indicative of necessary rather than sufficient criteria:

  1. Makes your colleagues laugh (ridicule)
  2. Your colleagues oppose it: “no way, it will never happen”
  3. Makes people wonder
  4. No one has heard about it before
  5. People would rather that no-one talked about it any more (a taboo)

Two more Finns, Leena Ilmola and Anna Kotsalo-Mustonen, discuss the importance of filters: “When monitoring their operating environments for weak signals and for other disruptive information companies face filters that hinder the entry of the information to the company”. Substituting “technology enhanced learning community” for “company” gives us our initial problem statement. Ilmola and Kotsalo-Mustonen describe three kinds of filter, following earlier work by Igor Ansoff, who is generally credited with introducing the concept of Weak Signals in the 1970s:

  1. The surveillance filter. Colloquially, “just looking under the street-lamp”. The obvious compensator for the surveillance filter in our situation is diversity of recorded sources.
  2. The mentality filter. We tend to only notice things that are relevant to our immediate context and problems. Information overload and tendencies to conform to social norms and be influenced by fashion compound the effects of people working “in the trenches”. By using text mining approaches we hope to compensate for these problems by filtering information in the recorded sources in a mental-model-agnostic manner.
  3. The power filter. The signals of change that lead to change of strategy or action do so through an existing power structure and become filtered according to political considerations. Ideas that challenge the status quo are threatening. As for the previous filter, we hope to avoid some of the effect of the power filter, although not entirely. Most recorded sources have already been subjected to implicit (many bloggers self-censor to protect their job/career) or explicit (e.g. Journal or magazine articles) power filters.

The adoption of a text mining approach over a diverse range of recorded sources offers a promising means to draw out some Possible Weak Signals, although I am clear that text mining will be challenging to apply and that it will only be useful in tandem with human engagement. Given an initial list of possible signals, it will be necessary to apply some heuristics, such as the Hiltunen checklist, to try to reduce “noise”. These can then be used to facilitate discussion and disputation, and cross-referenced with other studies and with the conclusions of our “human sources” track, leading to ever shorter lists. If we find a few cases where people say “you know what, that isn’t so crazy after all”, or similar, I will consider the activity to have been a success. The next post summarises mainstream text mining approaches, describes how Weak Signals considerations affect the selection of text mining methods, and outlines some ideas for applying text mining to look for possible weak signals.

An Informal “Horizon Scan” from CETIS

For the past three years we have created a largely internal and informal “horizon scan” of technology trends and issues of interest and relevance to members of CETIS. The 2011 edition (.doc) was created in March and has just been uploaded to a public URL. All three are available in “.doc” format under the Resources section of our “Horizon Scan” topic page.

These should be seen as a set of unprocessed perceptions rather than the product of a formal process; a great deal of ground is not scanned in this paper and no formal prioritisation process was undertaken. They are, therefore, not at all comparable with the NMC Horizon Reports. The CETIS Horizon Scan should be seen as a set of potentially-idiosyncratic “takes”: material on which discourse and disputation may occur to make possible futures clearer.

eBooks in Education – Looking at Trends

eBooks seem to be appearing more frequently on trains and to be more talked of in educational settings, but what are the trends behind these perceptions? One way of responding to this question is to use Google Trends or the “beta” Google Insights for Search. Clearly, this is only one perspective on a rather complex landscape of what people are doing in practice. I will describe some of the issues involved in using this data, and the statistical tools I used (various features and contributed packages in the R “environment for statistical computing and graphics”), in a separate article. The essence of the matter is that there is a whole series of biases, caveats and glossed-over statistical complexities.

Starting Out

After some tinkering with both Google Trends and Insights for Search, the facility of “Insights” to show trends filtered by categories encouraged me to opt for it rather than using “Trends” in spite of concerns about reliance on an unspecified categorisation mechanism.

The starting point is to access data for the search term “ebook” with worldwide coverage but filtered according to the “Education” category (a subcategory of “Society”). The time-based series looks like this (this and all other images link to larger versions that open in a new window/tab):

Worldwide searches for "ebook" categorised as "Education"

Aside from some apparently-random fluctuation, there seems to be some pattern:

  • a slight decline between 2004 and early 2007
  • an exponential rise from early 2007
  • small steps up around year ends

Consulting the full Insights report (link above) shows two further points of interest:

  • India seems to be a hotspot. NB: Google has normalised the data so this means a greater fraction of Indian searches were for “ebook” compared to fractions in other countries.
  • “fac” and “ebook fac” appear to be “top searches” yet seem to be meaningless.

Digging Deeper – What is the meaning of “fac”?

Crossword enthusiasts may have already come to an intuitive conclusion which seems to be borne out by Googling for “fac”; these “top searches” look like typos for “facebook”.

So, we should modify the query to “Insights” to discount searches including “fac”, while retaining worldwide coverage and the Education filter. The resulting time-series looks like this (link to the full report):

Searches for "ebook" excluding "fac"

The broad trend and end-of-year seasonality still seem to be present, but there seems to be more randomness and a few new features in 2009 and 2010. Removing “fac” has also revealed some interesting new “top searches” in addition to the to-be-expected combinations with “free”, “torrent” and “libre”: “toefl” and “gmat”.

Acronyms and abbreviations are often problematical to interpret as search terms, but “toefl” and “gmat” have very clear meanings within an educational context. The first is the Test of English as a Foreign Language and the second is the Graduate Management Admissions Test. Both are associated with access to Higher Education and both are conducted (although not exclusively) using computer-based tests. A separate investigation into trends related to these two terms might be interesting but has not been undertaken.

Decomposing “ebook -fac”: Trends, Seasonality and Transients

The time series for searches on “ebook” excluding “fac” was analysed using R (via the RKWard graphical interface). The broad question investigated was: what is the underlying smooth trend indicated by the data, and what, if any, seasonal variation is there? Discrepancies between the trend-plus-seasonality estimate and the data might be due to random variation or to transients with a particular cause.

“Seasonal Decomposition of Time Series by Loess” (Cleveland et al.) was used after taking the logarithm of the count data. The outcome is:

STL Decomposition

(recall that I am omitting detail from this article; jumping straight to this set of graphs is to omit quite a lot)

A number of features become clearer, but note that the scales of the four plots differ and that these plots are for a decomposition after taking logarithms:

  • The data appears to be a lot noisier. Whereas the previous plots provided by Google are smoothed to monthly values, the data acquired for analysis has weekly granularity.
  • The overall trend is in line with an intuitive response to the previous plot. The decomposition indicates a flattening of interest during 2010. We should probably wait a little longer before making confident predictions that we are entering a plateau period.
  • The remainder, which is the difference between the data and the estimated (seasonal + trend), contains some extreme values in 2004 and 2005. The inability of the decomposition algorithm to handle these suggests that there was quite a lot of excitability about the topic of “ebook” among Google searchers which seems to have subsided during the strong upward trend. This suggests a solid foundation rather than hype in the latter period.
  • A seasonal pattern does appear to have been detected but it is of a similar magnitude to the remainder.

A closer look at the degree of fit is not so easy using the chosen decomposition method; more sophisticated methods would give a wider range of diagnostic measures. The chosen method does, however, include an iterative procedure where difficult-to-fit data points are given lower weightings. This gives us a slightly more robust means of assessing where transient (“excitable” behaviour) or aberrant data may be. The following plot has a somewhat arbitrary vertical scale and combines the count data (weekly) as a black line with red circles indicating a degree of “transience/aberrance”:

transience-aberrance

Red circles on the baseline indicate datapoints with a weighting of 1, whereas those at the top were assigned a weighting of zero. This shows that the unfittable data was indeed located within 2004 and 2005, the period of “excitability” proposed from considering the remainders. It also confirms that the rather peaky appearance during 2010 was fitted and is not a transient. (NB: since logs were taken, the seasonal graph shows a multiplier to the trend, not a simple addition, when we revert to looking at the count data.)

It is interesting to consider the Gartner Hype Cycle at this point, as did my colleague Stephen Powell. From the above trend line and the transience apparent above, it is tempting to suggest a slightly different way of looking at it than the Gartner plot. The trough of disillusionment through to the plateau of productivity can be imagined over 2006–2011. The “hype” in the ebook data is not so much a positive bulge as a period of “excitement”: frequent but irregular transient peaks. From the point of view of analysts, pundits and know-it-all bloggers this might have seemed like a peak of inflated expectations, but aggregate over ordinary searchers and the peak is largely suppressed.

A Closer Look at the Seasonal Pattern

The magnitude of the seasonality is more easily seen if converted back from the log scale and smoothed. A 5-week moving average gives the following plot, which shows a seasonal variation of around +25% and -20% from the trend. Since logs were taken, the seasonality is expressed as a multiplication factor rather than an absolute count fluctuation.

smoothed-seasonal-count2

Bearing in mind that this seasonality is of a similar magnitude to the “remainder”, one should be cautious in the absence of a plausible explanation. Recall that the data was filtered according to the “Education” category. Maybe this pattern reflects a broader cycle of interest in matters educational rather than in ebooks per se. Could it be related to term dates in educational establishments?

Fortunately, Google Insights for Search provides the trend at the level of the entire category. Take a look at the “Growth relative to the Education category” on the Insights report page.

If the total level of activity in the Education category is used as a factor to rescale each week’s data point and the resulting values then processed as above, the following decomposition emerges:

stldecomposition-rescaled

It is quite clear that the seasonality in the “ebook -fac” case cannot be explained away by background seasonal variation in the Education category, as the magnitude of seasonal variation has in fact increased. The distribution of the remainder is also seen to be similar from inspection of quartiles and mean, although a detailed analysis was not undertaken (there are thought to be too many influences to make this valid). It seems that different seasonal patterns are at work in the category as a whole and in the “ebook -fac” search term. Different regions are expected to have different seasonal patterns in general, so the observed differences may reflect regional variation.

Considering One Country – the UK

One of the weaknesses of Insights is that you must choose between worldwide or single-country specific data. It is not possible to choose a set of English-speaking countries, for example, or a set of European countries. The latter poses additional challenges around mapping language-specific terms to concepts, although in the present case, “ebook” is used in non-English speaking countries alongside native language equivalents.

Focussing on one country may make the seasonal component more easily interpreted. The same method was employed, except “ebook -fac” is taken as the starting point. Insights provides the following plot (and report):

UK Searches for "ebook" (excluding "fac"), filtered by Education category

Until Spring 2008, however, Google judges there to be an insufficient search volume to provide data for each week in spite of there being a line shown in the above plot from mid-2005. The reason for this discrepancy is unclear but the consequence is that the decomposition has a start date of April 6th 2008. Without going into the details, it is again found that rescaling by the background pattern in the Education category does not magically reveal insights. Hence the unscaled decomposition is considered:

stl-decomposition1

Having a shorter period to decompose makes it challenging to pick out seasonal patterns, and the search volume is clearly more volatile than previously. Given the size of the remainder, it seems questionable to infer anything from the decomposed seasonality. Only two new observations seem to remain:

  • there is no evidence for a plateau in the trend
  • there is a clear negative deviation around June/July 2010 indicated by a cluster of sequential negative remainders. This is also borne out when looking at the weightings (the “transience/aberrance” plot)

It is tempting to suggest that there is rather more “excitability” in the UK from 2008 to the present than in the worldwide picture. This is rather speculative, but it is thought that a low search volume alone would not account for the noisiness. The mid-2010 negative deviation is a mystery; it would be easier to explain away a positive transient. Maybe a repeat of the analysis in 6 months’ time will show that we have indeed plateaued and that the transient was actually a positive one in late 2010.

Conclusions

All of the above is contestable and I encourage disputation and alternative interpretation. In terms of interesting putative observations, my top 5 are:

  1. India is a relative hot-spot, coupled with TOEFL and GMAT appearing with “ebook”.
  2. There is a possible correlation with the Gartner Hype Cycle, albeit with a revised interpretation.
  3. A seasonal pattern in worldwide activity remains unexplained.
  4. The UK seems to be “excitable” still.
  5. Typos have unexpected manifestations (“fac ebook”)

Finally, I’ve enjoyed dabbling with Google Insights for Search and have learned quite a lot along the way, both about how to use Insights and about the handling and decomposition of time series data. The next article will describe some of the “how I did it” using R.

UK Government Open Standards Survey

The Government Chief Technology Officers’ Council has recently begun an online survey of open standards.

This covers views on the meaning of “open standard” as well as relevance of specific standards.

Obviously, CETIS people will be responding but I’d like to encourage everyone with an interest in open standards to do so. I will be keeping an eye out for what might have been missed from the survey. Be warned, however, it might take some time if you have interests spanning several standardisation areas.

ÜberStudent, Edubuntu – A sign of what is to come?

ÜberStudent is a newcomer (launched 2010) to the world of Linux distributions, aimed at higher education and advanced secondary level students. Edubuntu has been around a few years longer. Both are based on the Ubuntu Linux distribution, which has a strong user-base among people who are not hard-core techies and who have migrated from Microsoft. A “distribution” is effectively a packaged-up combination of generic Linux code, drivers, applications, etc.

As someone who switched over to Ubuntu in October 2010, and with no regrets (I kept Windows 7 in reserve on the same machine but have never used it), I can imagine that ÜberStudent, or maybe a successor, may be onto a winner. Consider some “what ifs”:

  • ubuntu makes headway with its offering for MIDs and netbooks and erodes the Android and iPhone territory
  • students look to save more money
  • students react against “the man” (after blithely paying for years) as part of a general reaction to the banking crisis and government policy
  • facebook, Apple or Google get too greedy or conceited.

I speculate that it is just a matter of time before “we” (staff in universities and colleges and their IT suppliers etc) need to grapple with a new wave of issues around user-owned technologies. How well would we cope if everyone accessed the VLE and portal (Sharepoint?) with Firefox and accessed powerpoint presentations or submitted documents/spreadsheets using LibreOffice? How much worse does this look when they are paying £6000-£9000 per annum?

Before ending, I should say that ubuntu is not all bliss – getting modern operating systems to work across diverse hardware and configurations is seriously difficult – but overall, it has been a good move for me. I just want a computer I can use for work, mostly email, calendar, documents, web…

So, what do I like about the change? NB these are a mix of features of the technical aspects of Linux/Ubuntu and consequences of the Open Source model. In brief:

  • Less friction – there are fewer times when I have to sit, waiting for something to happen.
  • Less conceitedness – I really dislike the way MS Windows tries to control the way you do things. This has got worse over the years. It seems you have to brainwash yourself to the Windows Way to avoid temptation to profanity.
  • More freedom – no-one is trying to “monetise” things. No I don’t want to change my search engine, use a particular cloud-based application, get my games from XXX. This is one of the things that stops me going Android.
  • Less memory use – less than 1Gb is more than adequate whereas my XP used to just swallow it up
  • More disciplined software management – if you stick to using the Package Manager

And what don’t I like? Only that the video driver often crashes if I change to using a monitor rather than the laptop screen. In general, hardware is a source of niggles and battery consumption is not so well optimised as for Windows or MacOS (Apple is especially good at this as they control the hardware and operating system).

In conclusion:

  • I recommend you try ubuntu; you can run it from a CD or USB stick without “nuking” your current system, or try installing it on a PC that has just been retired (NB: on a machine much older than about 5 years, hardware driver support may cause problems)
  • Watch out for the future!

British Standards in ICT for Learning Education and Training – What of it?

The British Standards Institute committee IST/43 – ICT for Learning, Education and Training (“LET”) – has been in existence for about 10 years. What does IST/43 do? What follows is my response as an individual who happens to be chair of IST/43.

In the first few years, IST/43 had a number of sub-groups (“panels”) involved in the creation of British Standards. It was a time when there was a lot of activity worldwide and many new groups were created. What became clear to many people in IST/43 was that a much larger number of stakeholders had to be marshalled in order to achieve success than we had thought. In essence: for standards specific to LET we generally have to work at international scale, and otherwise adopt appropriate generic web standards. At present there are no standards under development in IST/43 and all previous panels have been disbanded.

So: where does this leave IST/43? In addition to creating British Standards, IST/43 is the shadow committee for European and international standardisation in ICT for Learning, Education and Training. These are known as TC353 and SC36 respectively. IST/43 effectively controls the vote at these committees on behalf of the United Kingdom. Full European Standards are called “European Norms” (ENs) and automatically become national standards. International standards created in SC36 do not automatically become British Standards; IST/43 decides one way or the other.

I will continue with a summary of current work programmes in TC353 (European) and SC36 (international) and indicate for each work item what the current position of IST/43 is. If you are interested in any of these areas, whether agreeing or disagreeing with the position that IST/43 takes as the “UK position”, you can nominate yourself for membership of the committee (email addresses at the end). Strictly speaking, it is an organisation that nominates; committee members represent that organisation. Comments below on the members of the committee should generally be understood to be representative of nominating organisations.

European

Work item:

BS EN 15943 Curriculum Exchange Format (CEF) Data Model

Comment:

This work, which allows for the exchange of subjects/topics covered in a curriculum, originated from work undertaken in the UK with support from BECTa and has had active support from IST/43 during its standardisation. Voting on the final standard is underway (Feb 2011).

Work item:

BS EN 15981 European Learner Mobility Model

Comment:

Members of IST/43 and others in their nominating organisations have been significant contributors to this EN, which matches the requirements of the Bologna Process and European Union treaties on recognition of qualifications across the EU. A formal vote on the final draft standard will end in February 2011.

Work item:

BS EN 15982 Metadata for Learning Opportunities (MLO) – Advertising

Comment:

Members of IST/43 and others in their nominating organisations have been significant contributors to this EN, which harmonises a number of nationally-developed specifications for exchanging course information (XCRI in the UK). A final draft has recently been submitted to the secretariat of TC353 and should be out for ballot later in 2011.

International

There are three broad classes of activity in SC36: those that create full International Standards (denoted “IS”), those that produce lower-status Technical Reports (denoted “TR”) and study periods. Study periods are not enumerated below.

WG1 Vocabulary

Work items:

ISO/IEC 2382-36:2008/Cor.1:2010(E)
ISO/IEC 2382-36:2008/Amd.1:2010(E)

Comment:

ISO/IEC 2382-36 is “Information technology — Vocabulary — Part 36: Learning, education and training”. These are corrections and amendments. There is little interest from IST/43 but a “yes” was registered at the last vote.

WG2 Collaborative technology

Work item:

ISO/IEC 19778-4 (TR), Collaborative technology – Collaborative workplace – Part 4: User guide for implementing, facilitating and improving collaborative applications

Comment:

There is no participation from IST/43.

WG3 Learner information

Work item:

ISO/IEC 29187-1 (IS), Identification of Privacy Protection requirements pertaining to Learning, Education and Training (ITLET) – Part 1

Comment:

There is no participation from IST/43.

Work items:

ISO/IEC 20006-1 (IS), Information Model for Competency — Part 1: Competency General Framework and Information Model
ISO/IEC 20006-2 (IS), Information Model for Competency — Part 2: Proficiency Information Model
ISO/IEC 20006-3 (TR), Information Model for Competency — Part 3: Guidelines for the Aggregation of Competency Information and Data

ISO/IEC 24763 (TR), Conceptual Reference Model for Competencies and Related Objects

ISO/IEC 20013 (TR), e-Portfolio Reference Model

Comment:

This collection of work items is of interest to IST/43 and has attracted new committee members during the last year or so. Substantial engagement and commenting on drafts has occurred and it has been proposed to convene a panel under IST/43 to coordinate UK engagement and make voting recommendations to IST/43.

The Conceptual Reference Model is near completion but the other work items are in the earlier stages of drafting. At present there are areas where consensus has not yet been reached in addition to a substantial amount of work being required in drafting and editorial.

Work items:

ISO/IEC 29140-1 (TR), Nomadicity and Mobility – Part 1: Nomadicity Reference Model
ISO/IEC 29140-2 (TR), Nomadicity and Mobility – Part 2: Learner Information Model for Mobile Learning

Comment:

There is no participation from IST/43.

WG4 Management and delivery of learning, education and training

Work items:

ISO/IEC 19788-1 (IS), Metadata for Learning Resources – Part 1: Framework
ISO/IEC 19788-2 (IS), Metadata for Learning Resources – Part 2: Dublin Core Elements
ISO/IEC 19788-3 (IS), Metadata for Learning Resources – Part 3: Basic Application Profile
ISO/IEC 19788-4 (IS), Metadata for Learning Resources – Part 4: Technical Elements
ISO/IEC 19788-5 (IS), Metadata for Learning Resources – Part 5: Educational Elements
ISO/IEC 19788-6 (IS), Metadata for Learning Resources – Part 6: Availability, Distribution, and Intellectual Property Elements

Comment:

The essence of “Metadata for Learning Resources” (MLR) is an international standard that over-arches Dublin Core metadata (which is already an ISO standard in an older version than the Dublin Core Metadata Initiative currently recommends) and IEEE LOM (Learning Object Metadata).

Opinions on the MLR work vary in the UK and it has been the subject of many discussions over the last few years. A series of comments and criticisms have been submitted to the working group on part 1 and dealt with to the satisfaction of IST/43 such that a “yes” vote was registered for the final vote on part 1. Parts 2 and 3 are at earlier stages and have also attracted recent “yes” votes (although “abstain” has been registered in the past when no views were presented to IST/43). It does not follow that “yes” votes will be registered for all parts.

Work items:

ISO/IEC 12785-2 (IS), Content Packaging – Part 2: XML Binding
ISO/IEC 12785-3 (IS), Content Packaging – Part 3: Best Practice and Implementation Guide

Comment:

This work is effectively standardising IMS Content Packaging. IST/43 supports the work and has adopted Part 1 (the information model) as a British Standard (BS ISO/IEC 12785-1:2009). Extended comments have been submitted into the working group.

WG5 Quality assurance and descriptive frameworks

Work items:

ISO/IEC 19796-1 (IS), Quality Management, Assurance, and Metrics – Part 1: General Approach
ISO/IEC 19796-2 (IS), Quality Management, Assurance, and Metrics – Part 2: Quality Model
ISO/IEC 19796-4 (TR), Quality Management, Assurance, and Metrics – Part 4: Best Practice and Implementation Guide
ISO/IEC 19796-5 (TR), Quality Management, Assurance, and Metrics – Part 5: Guide “How to use ISO/IEC 19796-1”

Comment:

The position taken by IST/43 on these pieces of work is largely passive; there is not strong interest, but they are recognised as being of potential interest to some in the UK and it is believed that they are not in conflict with UK requirements. Parts 1 and 3 have been adopted as British Standards.

Work items:

[not yet approved] Quality Standard for the Creation and Delivery of Fair, Valid and Reliable e-Tests

Comment:

This item has not yet been voted on by participating members of SC36 but the work item proposal is well developed and has been championed by a member of IST/43. This will be a proposal from the UK to SC36. See also a previous article I wrote.

WG6 Supportive technology and specification integration

Work items:

ISO/IEC 24725-1 (TR), Supportive Technology and Specification Integration – Part 1: Framework
ISO/IEC 24725-2 (TR), Supportive Technology and Specification Integration – Part 2: Rights Expression Language (REL) – Commercial Applications
ISO/IEC 24725-3 (TR), Supportive Technology and Specification Integration – Part 3: Platform and Media Taxonomy

Comment:

There is no participation from IST/43.

WG7 Culture, language and individual needs

Work items:

ISO/IEC 24751 Part-9 (IS): Access for All Personal User Interface Preferences
ISO/IEC 24751 Part-10 (IS): Access for All User Interface Characteristics
ISO/IEC 24751 Part-11 (IS): Access For All Preferences for Non-digital Resources (PNP-ND)
ISO/IEC 24751 Part-12 (IS): Access For All Non-digital Resource Description (NDRD)
ISO/IEC 24751 Part-13 (IS): Access For All Personal Needs and Preferences for LET Events and Venues (PNP-EV)
ISO/IEC 24751 Part-14 (IS): Access For All LET Events and Venues Description (EVD)

Comment:

This work has many roots in older IMS work on accessibility and the revisions are being fed back into IMS. Access for All has been actively contributed to by a member of IST/43, although his continued participation is in jeopardy due to lack of funding. Other members of IST/43 support the work but are unlikely to have the capacity to contribute directly.

Work item:

ISO/IEC 20016-1, ITLET – Language Accessibility and Human Interface Equivalencies (HIEs) in e-Learning applications: Part-1: Principles, Rules and Semantic Data Attributes

Comment:

There has been little participation from IST/43. A “no” vote with strong comments was agreed at the last IST/43.

Any work where “no participation” is stated will attract abstain votes without comment from IST/43 and is unlikely even to be discussed at committee meetings.

Tailpiece

Although the above indicates that there is currently no work on a British Standard in IST/43, the committee has discussed a new work item to create a British Standard that implements “BS EN 15982 Metadata for Learning Opportunities (MLO) – Advertising” along with an XML binding and vocabularies for use in the UK. This completes a cycle where the work of the JISC-funded XCRI projects was contributed into the EN process; the EN represents a core common language that each member state can conform with and extend for its local needs. Once the EN is approved, an “acceptance case” will be presented to BSI for this new work.

For more information:

This article is my own words and, although I believe it to be accurate, I ask readers to recognise that it is not approved by IST/43.

Anyone interested in joining IST/43 should contact the committee chair (me, a.r.cooper@bolton.ac.uk) or committee secretary (alex.price@bsigroup.com).

Whither Innovation?

Whither innovation in educational institutions in these times of dramatic cuts in public spending and radical change in the student fees and funding arrangements for teaching in universities? The pessimist might quip: “wither innovation”. It seems like a good time to think a little bit about innovation; I seem to be looking at it from another angle of late.

It seems to me that innovation always follows adversity, that “necessity is the mother of invention”. At one level, then, just as mammals diversified following the K-T Extinction Event, so innovation will occur. I would, however, rather see innovation without extinction, a future more like horticulture than cataclysm. I want innovation to be about opportunity, not necessity, but we are where we are, and an element of necessity is now with us. This is what this post is about, and I detect related sentiments in a recent blog post from Brian Kelly following the CETIS Conference, which he entitled “Dazed and Confused After #CETIS10”.

Innovation theorist Clayton M. Christensen coined the term “disruptive innovation” to refer to the way apparently well-run businesses can be disrupted by newcomers, either with cheaper but good-enough offerings that focus on core customer needs (low-end disruption) or with initial offerings into new markets that later expand into existing ones (new-market disruption). Disruptive innovation threatens incumbents whose strategy processes fail to identify and adopt viable low-end or new-market innovations. In our current context of disruption by government policy, this challenge to institutional (university) strategy is acute.

A short aside… In order to better understand these potentially disruptive innovations (or opportunities to weather the storm), we (UKOLN and CETIS with support from JISC) have made available an online tool to gather and allow ranking of such innovations. This will be available until about December 10th 2010. Please participate and disseminate the URL widely: http://tinyurl.com/disruption2010. [this survey is now closed; the report will be published in Jan/Feb 2011]

Strategy alone is inadequate; institutions need the capability to innovate in practice.

The Capability to Innovate

It seems clear that universities need to adopt innovations, but what does this require? Could we leave innovation to the commercial sector and buy it in? Or is there value to be gained in the sector through JISC and other bodies supporting innovation?

I recently came across some accounts of studies into commercial-sector R&D that seem to indicate that individual institutions should continue to have in-house innovation. There appears to be no a-priori reason to object to applying the argument to our context: universities innovating in their teaching and learning offerings (I intentionally omit mention of research). The most interesting account I came across is Cohen and Levinthal (1990), “Absorptive Capacity: A New Perspective on Learning and Innovation”, Admin. Science Quarterly 35(1). The abstract of this paper is:

“In this paper, we argue that the ability of a firm to recognize the value of new, external information, assimilate it, and apply it to commercial ends is critical to its innovative capabilities. We label this capability a firm’s absorptive capacity and suggest that it is largely a function of the firm’s level of prior related knowledge. […] We argue that the development of absorptive capacity, and, in turn, innovative performance are history- or path-dependent and argue how lack of investment in an area of expertise early on may foreclose the future development of a technical capability in that area. We formulate a model of firm investment in research and development (R&D), in which R&D contributes to a firm’s absorptive capacity, and test predictions relating a firm’s investment in R&D to the knowledge underlying technical change within an industry. Discussion focuses on the implications of absorptive capacity for the analysis of other related innovative activities, including basic research, the adoption and diffusion of innovations, and decisions to participate in cooperative R&D ventures.”

My hypothesis is that the same argument applies to universities in relation to their teaching and learning business if “R&D” is re-conceptualised as action research integrated across the following three areas:

  1. Pedagogy, specifically the application of unconventional models for supporting learning, whether “new to us” or “new to the 21st century”.
  2. Organisation: the way people and functions in institutions are structured, and how they work, should not be taken for granted. This isn’t about “business process re-engineering”, but rather a more systems-thinking point of view.
  3. Technology, appropriately applied as an enabler.

For each of these there are practical issues to work around – my spectacles are not rose-tinted – but this makes clearer to me the need for in-context, innovation-oriented activities. Many of these issues are messy, complex or “wicked problems”. As Laurence J. Peter said: “Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.” This sounds like a case for conversation, dialogue, debate, … ideally also collaboration.

My top issue for each area is:

  1. Pedagogic innovation is limited by the conservative view that society at large has of valid educational activities and measures of achievement. This is compounded by limitations imposed on universities that arise from policy decisions made in response to societal conservatism.
  2. Deep discussions on organisation are not the norm and Fordist assumptions abound.
  3. Technological considerations are generally not well integrated into institutional strategies; IT is too often an afterthought or servant, not a bed-fellow.

So… I think there is a case for intra- and inter-institutional innovation activities, and for conversations about them, and I hope that some of the above, and the ongoing activities that CETIS, UKOLN and JISC undertake and support, will help. “Absorptive capacity”, the “ability to recognize the value of new information, assimilate it, and apply it”, seems like an important concept to hold on to.

Oh, and please visit http://tinyurl.com/disruption2010.

Is there a Case for a New Information Literacy Inspired by …?

The problems that have to be solved in the 21st century to maintain or increase human health, wealth and happiness are highly complex. By “complex”, I mean that they are highly interconnected and impossible to understand accurately by looking at influential factors in isolation. Divide-and-conquer strategies and narrowly-focussed expertise are inadequate unless re-integrated to understand the bigger picture. This state of affairs is currently reflected in much research funding, but it isn’t just a concern for researchers.

Professionals in almost all walks of life will be faced with making decisions on matters for which there is little precedent and a shortage of established professional practice to fall back on. There is, and will be, a growing need for professionals capable of drawing on information from, and adapting to the paradigms of, multiple disciplines.

The trend in the supply of data from both research and public-sector communities is clearly in the direction of more information being provided under suitable “open data” licences and employing basic semantic web techniques. This resource, not confined by disciplinary boundaries or by utility to specific lines of argument, has great potential value in answering the complex and novel questions required to navigate humanity through the complexities of sustainable development. I contend that realising this potential is contingent on a new information literacy, and specifically a new digital literacy where open data, on the web or otherwise, is concerned.
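To make the cross-boundary potential of open data a little more concrete, here is a minimal illustrative sketch. Everything in it is invented for illustration (the URIs, datasets and values are mine, not real published data): two independent publishers describe the same entities, identified by shared URIs in the linked-data style, and a consumer can join the records to ask a question that neither publisher anticipated.

```python
# Illustrative only: two hypothetical open datasets that identify the same
# resources by shared URIs (the linked-data convention), joined to answer a
# question that crosses the boundary between the two publishers.

# Dataset A: a research group publishes river water quality (invented values)
water_quality = {
    "http://example.org/river/thames": {"nitrate_mg_l": 6.2},
    "http://example.org/river/severn": {"nitrate_mg_l": 3.1},
}

# Dataset B: a public body publishes upstream land use (invented values)
land_use = {
    "http://example.org/river/thames": {"arable_pct": 48},
    "http://example.org/river/severn": {"arable_pct": 22},
}

def join_on_uri(*datasets):
    """Merge records that describe the same resource (i.e. share a URI)."""
    merged = {}
    for dataset in datasets:
        for uri, record in dataset.items():
            merged.setdefault(uri, {}).update(record)
    return merged

combined = join_on_uri(water_quality, land_use)
for uri, record in sorted(combined.items()):
    print(uri, record)
```

The join only works because both publishers used the same identifiers; that, in miniature, is the kind of habit the new information literacy would need to make second nature.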

Whereas the use of historical data is well established in research in many disciplines, a new information literacy is required to realise the potential noted above: one that is not limited to academic research, that uses data disembodied from the narrative and argument of the journal article, and that transcends the limits of the established discipline. The challenge for the education system is to prepare the professionals of the future (and to help the professionals of today adapt through appropriate work-based learning) with this new information literacy. This “new information literacy” requires a deeper and more explicit understanding of the models employed within and outwith a professional’s “home” discipline, and of the embedded epistemology of that discipline.

The philosophy of General Semantics and the practices advocated by Alfred Korzybski and subsequent thinkers are of interest in that their focus is on “consciousness of abstracting” as a means of avoiding the conceptual errors often made in interpreting linguistic acts or the experience of events. Rather than asserting that General Semantics is “the answer” (indeed, it certainly contains fallacies and unsubstantiated ideas), I suggest that it offers some valuable insight into the mental habits that can improve the ability of professionals to work across disciplines, whether using open data from research and public-sector sources or not, to answer the questions of tactical and strategic character that sustainable development requires.

Neuro-Linguistic Programming (NLP), founded as a movement in the mid 1970s and with clear links to Korzybski, contains some further useful ideas if one looks beyond the psychotherapeutic dimension. My intention is not to develop a detailed position in this post but to suggest that there are some practices/habits advocated by Korzybski and others that offer a resource for us to consider. Some of the maxims and practices that I think are candidates for education in the new information literacy, hence also a new digital literacy, are:

  • Korzybski’s “extensional devices” are practical habits that stress the relationships between things (as opposed to things defined in isolation).
  • Gregory Bateson, in “Mind and Nature: A Necessary Unity”, presents a number of pre-suppositions that “every schoolboy knows” (sic) but that are actually more representative of gaps in thought.
  • The meta-model of NLP provides a set of heuristic questions to identify distortion, generalisation and deletion in language. These kinds of questions are potentially useful when working across disciplines to reduce the chance of false reasoning.
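The flavour of those meta-model questions can be conveyed with a deliberately crude sketch. This is a toy, not a serious implementation of the NLP meta-model: the cue-word lists and challenge questions are my own invented examples, chosen only to show the mechanical idea of spotting surface cues of generalisation and deletion in a statement.

```python
# A toy illustration (not a serious implementation of the NLP meta-model):
# flag surface cues of generalisation ("always", "never") and deletion
# (comparatives with no reference point) in a statement, and suggest a
# meta-model style challenge question. Cue lists and questions are invented.

CUES = {
    "deletion": {
        "words": {"better", "worse", "more", "less"},
        "question": "Better or worse than what, by what measure?",
    },
    "generalisation": {
        "words": {"all", "every", "always", "never", "nobody", "everyone"},
        "question": "All of them, always? Can you think of an exception?",
    },
}

def challenge(statement):
    """Return challenge questions triggered by cue words in the statement."""
    tokens = {t.strip(".,!?;:").lower() for t in statement.split()}
    return [info["question"]
            for pattern, info in sorted(CUES.items())
            if tokens & info["words"]]

print(challenge("Students always learn better with technology."))
```

A real practitioner asks these questions of themselves, of course; the point of the sketch is only that the cues are partly detectable at the surface of language.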

I will now complete the title from where the ellipsis left off: “… General Semantics, Neuro-Linguistic Programming and Gregory Bateson”.

Workshop on Global eBusiness Interoperability Testbed Methodologies

I just caught an announcement of a new CEN Workshop that looks rather interesting from an interoperability point of view.

“The Global e-Business Interoperability Test Bed project (GITB) focuses on methodologies and architectures that support e-business standards assessment and testing activities from early stages of eBusiness standards implementation, to proof-of-concept demonstrations, to conformance and interoperability testing.” – from http://www.cen.eu/cen/Sectors/Sectors/ISSS/Workshops/Pages/Testbed.aspx

The business plan is ambitious, to say the least, but even a modest advance in technical terms could have significant practical benefit. I certainly see the future of interoperability standards in education being meshed with achievements in global eBusiness interoperability as well as with global web standards.
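To give a sense of the gap between what GITB envisages and the most basic building block of the field, here is a minimal sketch of a conformance check: does an instance document contain the elements a specification requires? The “specification” here is invented for illustration (a hypothetical course-description format with three required elements); interoperability testing then goes well beyond this, exercising actual exchanges between independent implementations.

```python
# A minimal sketch of a conformance check: verify that an XML instance
# contains the elements a specification requires. The "spec" here is a
# hypothetical example; real test beds such as GITB aim far beyond this.
import xml.etree.ElementTree as ET

REQUIRED_ELEMENTS = {"title", "identifier", "provider"}  # hypothetical spec

def conformance_errors(xml_text):
    """Return a sorted list of required elements missing from the instance."""
    root = ET.fromstring(xml_text)
    present = {child.tag for child in root}
    return sorted(REQUIRED_ELEMENTS - present)

sample = """<course>
  <title>Interoperability 101</title>
  <identifier>course-42</identifier>
</course>"""

print(conformance_errors(sample))  # the sample omits <provider>
```

Conformance of each party to the specification is necessary but not sufficient for interoperability, which is why testbed methodologies that cover the whole chain, from early implementation to cross-implementation testing, are worth watching.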

Biometrics Code of Practice – Draft for Comment at the British Standards Institute

A draft for public comment of a new “Publicly Available Specification” (PAS) is available until October 5th. PASs are not full standards and have not been through the same level of (time-consuming) consensus assurance as full standards.

The full title is “Code of practice for the planning, implementation and operation of a biometric system” and it is otherwise known as PAS92:2010. This PAS looks like a helpful guide to a complex legal, technical and ethically contentious area (I am not expert enough to comment on its accuracy).

Comments may be made via the BSI drafts website, specifically for PAS92 at http://drafts.bsigroup.com/Home/Details/591. You will need to register.