Twenty-First Century Computing


Preface


    Computing and information technology became part of our everyday life within a single generation. This is a dramatic change, even more profound and faster than the industrial revolution. Naturally it raises several important questions, which are being discussed and will continue to be discussed in many forums. I think the most important question is how computing can support the development and strength of our society, and how people can use it in the most effective way.
    What differentiates a society from a group of individuals is that members of a society can afford to specialize and are able to cooperate very effectively. Naturally this needs rules and bounds to channel energies in the right direction and to protect the society against non-conforming individuals. However, there is a single major benefit which makes all the effort of building and maintaining a society worthwhile: the sharing of knowledge. A group of effectively cooperating people with different skill sets and knowledge is more effective than any single expert. (See "The Wisdom of Crowds" by James Surowiecki; http://www.youtube.com/watch?v=U0LhDQD7-ms)
    This doesn't mean that we don't need experts, scientists and gurus. What we need, however, is an environment where those gurus can concentrate on what they know best and let others deal with the "details" in which they excel. That is cooperation and the sharing of best practice.
    However, IT is still often run by gurus. To create a new report, implement an algorithm or put together a website, we often rely on gurus. That is bad, because software is nothing other than packaged knowledge (see "The One Minute Risk Assessment Tool" by Amrit Tiwana and Mark Keil; http://www2.cis.gsu.edu/dmcdonald/MBA8120/Session5/RiskAssessmentTool_CACMFall2006.pdf), and if you can't package and publish this knowledge you suffer a disadvantage. Other disciplines have similar problems (e.g. to publish a book you need a publisher), but the promise of the Internet is easy access to, and delivery of, information. (See "The Long Tail" by Chris Anderson; http://www.youtube.com/watch?v=LlAZ9t2m7-E&feature=related)
My objective with this paper is to explore the opportunities and tools for more effective knowledge sharing.

Methodology
    The simplest methodology is used: I simply go through the traditional, existing and potential methods of knowledge sharing and draw some conclusions. On the Internet I focus on text documents as the primary means of knowledge sharing and see new methods as extensions of it. This is a deliberate limitation; although other very successful methods of knowledge sharing exist (e.g. eLearning and video), I think discussing them wouldn't add much to the topic at hand. Maybe next time.
Knowledge Share
The Traditional Way
Since prehistoric times, sharing knowledge has been the basis of civilization. That's no surprise: people started to live in groups because they could share tasks, combine their skills and so be more effective. However, this is not possible if they don't share their knowledge. There are two basic ways of sharing knowledge: the first is using the knowledge for the benefit of others (for some direct or indirect, usually material, reward); the second is teaching others the things you know. In the latter case the reward is often more indirect.
The thing we call a job, or work, belongs to the first category; books, teaching, practice and conferences are in the second. Work groups are somewhere in between.
Books
Historically, the use of books for information access became widespread only at the end of the nineteenth century. Before that it was the privilege of a select few. To this day, books are the most used source of information. For a physical storage medium they have relatively large capacity and mobility; they are durable, easy to handle and usually give a nice feel. On the other hand, compared to electronic media their space requirement is large, they are relatively hard to search, and they are read-only; it is hard – albeit possible – to add personalized content to them.
Teaching
Teaching is a more traditional way of knowledge sharing; it has actually existed since the beginning of mankind, as parents have always taught their children. Even the most primitive cultures use teaching as a knowledge-sharing tool. Teaching is very effective (more effective than books), mostly because it has immediate feedback, can be tailored to the people receiving the knowledge, and is easy to combine with other methods: books, exercises and so on. However, personalization is also what limits the power of teaching: its effectiveness declines with the number of participants, it needs infrastructure, and the participants have to make the effort to be in the same place at the same time. Even one-to-one teaching requires constant alignment and concentration from pupil and teacher (you can set a book aside if you feel tired, but a lesson is lost if you can't concentrate).
Practice
Learning by doing is something we always do, though on its own it is not very effective. It becomes very effective, however, when a tutor can support the process.
Conferences
Conferences are very similar to teaching, with the difference that they are usually about new, not yet settled knowledge, and the sharing of knowledge is more two-way. The presenter not only wants to share what she has found, developed or concluded, but is also interested in the feedback of the audience.
Work-groups
Work-groups and teams are very popular for solving complex tasks because members can approach the tasks from different points of view, using their special expertise and knowledge. However, this efficiency comes at a price: there is always an overhead, even where the cooperation culture is strong (and in some countries or workplaces it is not); administration is needed to manage the work, meetings have to be organized, tasks have to be broken down, and somebody with ample authority has to deal with the group dynamics. A wrong team can be counterproductive.
The Modern Way
eBooks and electronic publications
Electronic books are the next generation of books. Nowadays everybody who has a valuable topic to write about can compose a document and publish it in practically any format. However, that doesn't guarantee accessibility (by contrast, a publisher guarantees at least that it will distribute your book in its shops); I will discuss this later.
The other point is that an eBook could and should deliver more than a "normal" book. It can be put in context, i.e. reference other materials, texts, videos, dictionaries and thesauri, and the user can add and share his comments, notes and bookmarks.
In other words, content enrichment is a key element of digital publications; it is an important added value compared to traditional publications. The following content enrichment methods and tools are feasible (a small sketch of the last item follows the list):

  • References: links to other documents, publications, websites.
  • Dictionary: in a narrow sense it allows interpreting, translating and exploring words found in the text. In a broader sense it allows navigating a thesaurus and exploring a broader set of related documents.
  • Search: a good solution would not only look up terms in the document itself but also look for results in the referenced documents.
  • Similar documents: look for similar documents, web pages, etc.
  • Text/Data mining: a clever algorithm could find keywords, concepts, assessments and data in the text. This would help to formalize intrinsic knowledge and also give good feedback to the author.
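
As a toy illustration of the text/data-mining item above, here is a minimal keyword-extraction sketch (in Python, which I use for all code sketches in this paper); the stop-word list and frequency scoring are illustrative assumptions, not a description of any real product.

    # Minimal keyword extraction sketch: score words by frequency,
    # ignoring common stop words. Purely illustrative.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

    def keywords(text, top_n=5):
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return [word for word, _ in counts.most_common(top_n)]

    print(keywords("The semantic web gives meaning to web content. "
                   "Semantic metadata makes web content machine readable."))
    # -> ['web', 'semantic', 'content', 'gives', 'meaning']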
Distance learning
Distance learning is, in my view, very similar to traditional learning, except that it can break through geographical and time barriers. Personal contact is and will remain the most powerful way of teaching, but it is also very resource intensive (buildings, organization, travel, administrative overhead…). A good mixture of the two can be the most effective way of learning.
(Distance learning is very similar to telework.)
Software
Software, at its core, is nothing other than knowledge – structure, workflow, algorithms, rules… – plus content provided by the user(s). The history of the last decades shows how powerful this concept is. However, there is one serious difficulty: the implementation of knowledge is tedious and resource intensive, and it requires programmers who must understand the knowledge to be packaged. That is a major obstacle, because sometimes even experts have difficulty formalizing their knowledge, not to mention transferring it to professionals of another field.
What we would like to have is something like an application blog, e.g. where a tax expert could not only write about a new tax calculation method but also publish a tax calculator for the purpose. This question of user programming is also important in traditional (e.g. ERP) systems.
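
To make the idea concrete, here is a minimal sketch of such a published tax calculator in Python; the tax bands and rates are invented purely for illustration.

    # A "published tax calculator" sketch: the kind of small, runnable
    # artifact a tax expert could attach to an article. The bands and
    # rates below are invented for illustration only.
    def tax_due(income):
        bands = [(10000, 0.0), (40000, 0.20), (float("inf"), 0.40)]
        tax, previous_limit = 0.0, 0
        for limit, rate in bands:
            taxable = max(0, min(income, limit) - previous_limit)
            tax += taxable * rate
            previous_limit = limit
        return tax

    print(tax_due(55000))  # 30000*0.20 + 15000*0.40 = 12000.0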
Web
It doesn't make much sense to praise the importance of the Internet. Others have done so very profoundly, and we are all aware of it. What is important for us is that while the Web opens the door to easy publication and knowledge sharing for the masses, it also raises questions concerning the deluge of information, credibility and security.
Blog
Blogs are a simple and easy way to share thoughts and follow events. However, by nature blog entries are short, rather like pieces of news. Blogs can raise interesting thoughts and questions and direct interest to a topic, but they can't convey deep knowledge.
Wiki
Wikis are probably the best way to collect knowledge in a certain area (see Wikipedia).
Website
A website is a complex thing; it can contain any kind of content or application.
The Twenty-First Century Way
Semantic Web
The semantic web (sometimes called Web 3.0; "The Semantic Web", http://www.youtube.com/watch?v=OGg8A2zfWKg) is an aspiration to give dumb content a meaning. While a text on a web page has no meaning for the computer, the semantic web can classify texts, sentences or keywords according to some classification scheme. For example, the sentence "Semantic web is a new computing paradigm." has no meaning for a computer. However, if you attach the classification "information technology" to it, then the computer will have an easier job attaching search results, related articles and user preferences to it.
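
As a small illustration, here is how such a classification could be attached to a document as machine-readable metadata, using the rdflib Python library; the document URI and subject term are made-up examples, and this is only one possible encoding, not "the" semantic web format.

    # Attach a subject classification to a document as RDF metadata.
    # The document URI is a made-up example.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    doc = URIRef("http://example.org/articles/semantic-web")
    g.add((doc, DCTERMS.subject, Literal("information technology")))

    # Prints something like:
    # <http://example.org/articles/semantic-web> dcterms:subject "information technology" .
    print(g.serialize(format="turtle"))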
The difficulty of the semantic web is the construction of the semantics themselves. First, it puts a "burden" on the publisher, who has to enter the semantics into the content. Second, it needs a common understanding and structure of semantics (i.e. metadata). I think these obstacles also define the limits of the semantic web. People publishing on the Web will add keywords, labels and some metadata to their texts (or other content), but it is not very probable that they will be able and willing to use a complex thesaurus for every word or other piece of content they produce. Furthermore, I don't think there will ever be a common thesaurus for every domain of knowledge. Rather, there will be several different thesauri for each domain; it is not very probable (nor desirable) to have one single "structure of knowledge".
And last, the semantics will be produced by humans, so it seems that we just push the problem to a higher level. If people phrased their sentences more clearly, no semantic web would be needed; search engines could retrieve the semantics without major difficulty. (Although the relative importance of text on the web is declining, I discuss only text publications here. The same rules apply to pictures, videos and applications in general.) And this is actually happening now. (For a good example see http://www.silobreaker.com/)
To summarize: the semantic web indeed makes sense; it would help to increase the value of content and improve knowledge sharing in certain domains significantly, but it won't bring a fundamental change in the way, and the efficiency with which, we use the Internet.
Problem databases
Wikis (especially Wikipedia) are an excellent and very popular knowledge-sharing tool. However, they contain passive content only. What I would prefer is a wiki which also supports algorithms in a uniform way. Such a database would, for example, not only describe the traveling salesman problem but also implement the algorithm in a standard manner.
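
For example, an executable wiki entry on the traveling salesman problem could ship a runnable heuristic next to the description. A minimal sketch (the classic nearest-neighbour heuristic, with made-up city coordinates) might look like this:

    # Nearest-neighbour heuristic for the traveling salesman problem,
    # as a toy example of what an "executable wiki entry" could contain.
    from math import dist

    cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

    def nearest_neighbour_tour(start="A"):
        unvisited = set(cities) - {start}
        tour = [start]
        while unvisited:
            here = cities[tour[-1]]
            nxt = min(unvisited, key=lambda c: dist(here, cities[c]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    print(nearest_neighbour_tour())  # ['A', 'C', 'B', 'D']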
Publication platforms
There are plenty of web publication and content management platforms. However, they are mostly limited to work-groups and companies. We need something more open and common, much like a social network.
Access to knowledge
The credibility problem
Credibility is a long-standing problem of knowledge sharing – just think of fake degrees or plagiarism scandals. However, a couple of decades ago you could trust the sources of information or knowledge to a certain degree. Educational institutions had to meet accreditation criteria, newspapers employed well-educated, experienced journalists with a track record, and professional books were reviewed by an editor. In general, there was enough stability, and there were enough barriers to knowledge sharing, that only professionals could afford to share knowledge on a large scale, and credibility was easier to check and control. Naturally this world was far from ideal; it was just much easier to find your way among the sources of information. Now not only has the amount of available information exploded, but so have its sources. New things arrive, and new sources of information appear and disappear. How do you know which to trust? Is page rank a good measure of credibility? Or are much-"liked" pages credible? They are a measurement at least, but not a very reliable one. From the publisher's/author's point of view, credibility must be built, managed and protected even more carefully on the Web than in life. In life you can expect that time heals your early mistakes; on the Web these mistakes may be only a click away. Character is even more important than before, even if that character is virtual. I don't think it's easy to maintain a virtual character different from your own for a long time, not to mention the moral issues.
As a reader, it's even harder to decide which source to trust. Some sites have a well-earned reputation, but many do not. It would be good to build a credibility index (where those who grant credibility are measured as well), like Digg, del.icio.us, StumbleUpon and so on. A credibility rank, similar to page rank, can also be imagined. The credibility rank (CR) could depend on the sources referred to – a higher number of credible sources referred to would result in a higher level of credibility – and on agreement: the CR of a site is higher if other (credible) sites dealing with a similar topic come to comparable conclusions.
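
To make the idea concrete, here is a toy credibility-rank iteration, loosely modelled on page rank; the link graph, damping factor and scoring rule are illustrative assumptions only.

    # Toy "credibility rank": sites pass on credibility through the
    # sources they refer to. The graph below is made up.
    links = {                     # site -> sites it refers to
        "a.com": ["b.com", "c.com"],
        "b.com": ["c.com"],
        "c.com": ["a.com"],
    }

    def credibility_rank(links, damping=0.85, iterations=50):
        cr = {site: 1.0 for site in links}
        for _ in range(iterations):
            new_cr = {}
            for site in links:
                referred_by = [s for s, outs in links.items() if site in outs]
                inflow = sum(cr[s] / len(links[s]) for s in referred_by)
                new_cr[site] = (1 - damping) + damping * inflow
            cr = new_cr
        return cr

    print(credibility_rank(links))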


Naturally, every person and organization wants to build good relationships, but it is time to put credibility on your agenda: be credible also for people you don't even know!


Shops
Subscriptions
If you register on a website and pay for access, you have a subscription, exactly as with a newspaper. Newspapers are actually information brokers: they collect, select, organize and distribute content. There is broad specialization among publications, from general magazines down to very specialized small-circulation publications. As a reader, I can expect a certain quality regarding the level and spectrum of the articles.
There are two problems with this model: the first is (revenue-)free access via search engines and content aggregators, and the second is limited personalization.
Free access may make traditional journalism unsustainable, which is problematic because it may lead to the loss of the values of traditional journalism. Although free journalism (e.g. blogs) often gives better information on certain topics and details, professional journalists and editorial teams can provide better overall quality and coverage. (No wonder that, as time goes on, the structure of website teams becomes more and more similar to traditional editorial organizations.) However, free information is threatening this model, and we may lose something on the way. If there is a need for quality journalism in the future, I expect that content creation and content aggregation will be separated and done by different businesses in the same supply channel.
Limited personalization is the other problem. In the media business, a venture can either publish a journal with broad coverage or a series of specialized journals. Both have natural limitations: either we try to reach everybody and don't really satisfy anybody, or we go into every niche, which is not very economical below a certain level.
Websites try to resolve the limited-personalization problem by giving users the freedom to select from different sources and styles. This approach also has its limitations:

  • Customers may find it difficult to find all relevant sources
  • There is no system for source qualification (e.g. topic, subtopic, quality, reliability, length…)
  • The solution is static; it narrows the potential sources and types of content, which becomes a barrier for the user who wants to broaden her knowledge and find new, interesting information.
I suggest a three level model:

  1. Content providers
Content providers produce content (including applications), probably in several formats (long study, professional article, popular article, news) and languages, and also take care of updates if needed.
Secondary content providers may also be involved, e.g. translators, experts doing content enrichment, programmers implementing algorithms and models…

  2. Content aggregators
Content aggregators qualify the incoming content regarding topic, subtopic, geography, importance, quality, reliability, style, length, difficulty…

  3. Customer support system
Users may access the content, typically through a portal, and state their preferences accordingly. However, a recommender system performs deeper profiling, even considering time (e.g. during the day, in the evening, at the weekend, winter/summer…). It also takes care to propose new types of information and provides an opportunity for feedback.
eShops
Strictly speaking, electronic shops are only a way to sell you something. Basically there are two kinds of shops: the large shopping mall and the small grocery on the corner. Shopping malls became popular in the seventies and eighties, but now it seems that they are past their life cycle. (See: http://www.economist.com/node/10278717?story_id=10278717 or http://deadmalls.com/) Small shops have lost a lot of their appeal in recent decades, and we don't know whether there will be a revival. However, they still have the advantage of the personal touch, provided the personnel are doing their job well.
Web shops don't have physical limitations, but they are even more impersonal than a shopping mall. And that is exactly the point where improvement can and should be made. The first step is building credibility: users have to trust your site, your information and the products you sell. Next, we have to offer quality. Quality means, besides 24x7 operation, speed and hit rate, also familiarity: the feeling of being at home, finding our way easily among the zillions of products, offers, etc. I think we should be careful with personalization, as it can step by step narrow the choices. A logical step would be to mix small shops and web shops, to give the personal experience and joy of shopping while keeping the huge selection and easy access of web shops.

Appstore
Technically, it's not easy to understand the popularity of apps. A well-written web app can give a similar user experience without all the hassle of implementing, porting, installing and updating a specialized application. On the other hand, apps have some benefits: they can use local resources better, providing a better user experience, and they also work offline. However, I think that the feeling of ownership is at least as important. (Unfortunately I haven't yet seen any proof or analysis of this.)
Community
Social networking is enormously successful. What I miss is a community site which supports real collaboration, can support – even ad hoc – work-groups efficiently, or can crowdsource tasks. Social networks have the potential for it, but they still have to deliver.
Search
Search is now a primary access interface to the web. Its success is due to the huge amount of information, which can't be effectively managed with other, more manual, methods. Search is an automatic way to access content and as such is very new. Its widespread use drives innovation in search, and as a result search is partially replacing other access methods:

  • Content qualification: search engines are able to differentiate between more and less relevant content.
  • Navigation and linking: search engines are able to collect similar documents and give references to them. This also means that search engines are able to build taxonomies and recognize keywords.
  • Query/question: although search engines are not designed to answer questions, their query analysis features are good enough to interpret questions and return relevant content.
  • Personalization: search is able to recognize personal preferences.
Search has huge advantages:

  • Unlimited access: search engines can process any amount of information (at a cost, but it can be done).
  • Fast update: modern search engines analyze and index new documents within hours of publication (or even faster).
However there are some drawbacks too:

  • As the indexes are made by machines, their intelligence is limited
  • Targeted search is often difficult: search engines try to give you more options and so always broaden the results, even if you want to make them more specific.
I think search will remain a dominant interface for information access in the future and will stay at the frontier of development. What will probably change is the search interface: search will more often be "disguised" as navigation and linking, or even mimic a natural language interface.
Navigation
Navigation and linking form the classical web interface (web documents are hypertext documents containing links to other documents); in the first version of HTML, linking was the preferred and only way to look up information. However, this is not very effective if you want to look up specific information, because it is time consuming to go through all the links, which do not inevitably lead to the information you are looking for. So portals emerged (which are nothing other than documents containing organized links) to structure content and ease navigation.
Linking and navigation have the following benefits:

  • Navigation gives an easy to understand and straightforward way to access relevant content
  • Linking connects related information; it gives depth to information, allowing you to stay at a higher level or dig as deep as you want.
In general, linking and navigation support information discovery: navigation on the concept level, linking on the document level.
However, linking and navigation have their limitations as well:

  • It's manual work, which is time and resource consuming.
  • Quality is variable
  • Targeted access – when you are looking for a certain piece of information – is tedious
Query
Research trail
In the time of the great discoveries, explorers had to beat their way through unknown territories and be prepared for the unexpected: face dangers, overcome obstacles, fail and restart – all with unbelievable effort. The people who followed them had a much easier task: they could follow the "trail" of their predecessors.
A research trail is the path a researcher follows during her journey through information, documents, tasks, ideas, experiments… Recording the trail of the research is very useful for documenting how one came to a conclusion and what steps were taken. This is a very common thing; almost everybody doing research takes notes, keeps logs…
If we do research on the Internet (in practice, all of us do), we navigate/search from site to site. Technically it is not difficult to log the trail of our research. Let's assume that we can share this trail with others and that there is a database of such logs. Let's also assume that if you enter a search expression or open a website, a tool shows you in which direction other people who entered the same search expression or opened the same website "went" from there. You could see the major routes and also where they lead. E.g. by entering the word "Chicago" into the search box, the most popular travel, hotel, culture, history and political sites would be displayed to me. (This is also called social search. This is a good example, albeit the site seems to be dead: http://blog.researchtrail.com/)
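
A minimal sketch of such a trail database follows: it logs (query, next site) steps and, for a given query, shows where most people "went" from there. The data and structure are illustrative assumptions.

    # Toy research-trail store: log which site people visited next
    # after a given query, then report the most popular routes.
    from collections import Counter, defaultdict

    trails = defaultdict(Counter)   # query -> Counter of next sites

    def log_step(query, next_site):
        trails[query][next_site] += 1

    def popular_routes(query, top_n=3):
        return trails[query].most_common(top_n)

    log_step("Chicago", "travel.example.com")
    log_step("Chicago", "travel.example.com")
    log_step("Chicago", "history.example.com")
    print(popular_routes("Chicago"))
    # [('travel.example.com', 2), ('history.example.com', 1)]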
Personalization   
Personalization helps people to define their particular needs and interests and to have more effective access to knowledge. However, it may also confine users to the sources of information they used before, and this can be counterproductive, because (1) people like (and need) to discover new things, and (2) sources and people change, and personalization may dampen this process.
Recommender
A recommender is, in essence, automated personalization. It requires no extra effort from the user and is more flexible. On the other hand, its use is limited to showing information which may be of interest to the user. With time, the difference between personalization and recommenders will diminish.
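
A minimal recommender sketch: count which items co-occur in user histories and recommend the most frequent companions ("readers who viewed X also viewed Y"). The histories below are made-up illustrative data.

    # Item-to-item recommendations from co-occurrence counts.
    from collections import Counter
    from itertools import permutations

    histories = [
        ["tax-guide", "vat-rates", "payroll"],
        ["tax-guide", "vat-rates"],
        ["payroll", "hr-basics"],
    ]

    co_views = Counter()
    for history in histories:
        for a, b in permutations(set(history), 2):
            co_views[(a, b)] += 1

    def recommend(item, top_n=2):
        scored = [(b, n) for (a, b), n in co_views.items() if a == item]
        return sorted(scored, key=lambda x: -x[1])[:top_n]

    print(recommend("tax-guide"))  # [('vat-rates', 2), ('payroll', 1)]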
Web database
Databases provide effective access to structured data.
New business models
Advertisement
This is now the business model of the web for knowledge and information sharing. Google and other large web portals use this model. It is not a new thing; broadcasting (TV, radio) and newspapers also rely on advertising as their primary source of income. The web has the advantage that it can align ads with context and user preferences, providing even more targeted advertising.
Although a very straightforward and clear model, advertising also has its drawbacks:

  • Ads take time and space away from real content
  • Ad income is often realized by search companies and portals, and not by the primary content producer
As we can see, the first problem is a concern for the customer. For them the solution could be a subscription service. The second hits the content providers; in a similar manner, they too can ask a fee for the "reuse" of their content.
Donation
That is how Wikipedia works, and it provides the best user experience (the only ads are the ones where the site asks for donations). In my view, the only limitation of this model is that people – who earn their money in a different way – spend money only on projects which they think serve some public good. (And they are fairly right to do so!)
Web shop
Appstore
Apple made this kind of distribution popular, and now everybody with an acceptable market presence is building something similar. The benefits seem obvious: developers have an excellent distribution channel, there is a certain level of quality and security maintenance, the applications are tailored to the platform, and customers have a broad selection and the feeling of ownership. Modern applications can cleverly share local and cloud resources too.
Technology
Databases
Databases give strong structure to data, which opens the door to accessing and processing large amounts of information and to transforming raw data into information (e.g. through data mining). The price – no wonder – is lower flexibility and increased investment in knowledge. If data embedded in text could be made explicit, access to knowledge could get a boost. There are several ways to do this; here are some examples:
Data recognition and exposition
With refined linguistic knowledge, data (places, companies, numbers, qualifiers, etc.) can be recognized in text. My favorite example is Silobreaker (www.silobreaker.com), which is able to categorize data found in text. In fact, Silobreaker is more about classification and connecting related information than data exposition, but it shows how hidden information can be made visible to a broad audience. However, even this wonderful solution can't make all data visible, and it sometimes makes mistakes. The solution could be to make every piece of data explicit at the moment of creation.
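
As a crude sketch of the principle, percentages, large numbers and capitalized names can be pulled out of raw text with regular expressions; real systems such as Silobreaker use far richer linguistic models, and the patterns below are simplistic assumptions.

    # Crude data recognition with regular expressions. The patterns
    # are deliberately simplistic; they only illustrate the idea.
    import re

    text = "Acme Corp raised prices by 12% in Chicago, affecting 3,500 customers."

    entities = {
        "percentages": re.findall(r"\d+(?:\.\d+)?%", text),
        "numbers": re.findall(r"\b\d{1,3}(?:,\d{3})+\b", text),
        "names": re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", text),
    }
    print(entities)
    # {'percentages': ['12%'], 'numbers': ['3,500'], 'names': ['Acme Corp', 'Chicago']}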
Semantic Web
Simply speaking, the objective of the semantic web is to give meaning to (i.e. expose) the data embedded in text. Using that data, new information sets can be generated, complex searches become possible across different web pages and domains, and automatic agents can look up complex information.

The semantic web is not a new concept, and considerable effort has been put into it. Still – in spite of some good examples – I can't see many semantic web applications. That doesn't mean the semantic web doesn't make sense or that there is no progress. I just say that progress is slow and maybe we are missing something. What that might be, I will discuss later.

Wikidata
Wikidata is a good example of the semantic web in practice. We can also see that a strong and well-organized community has set relatively modest targets. In the first step, the objective is to replace data found in texts with links to a database and to share that database among entries. The practical benefit is that translations can use the same database. The future is not yet clear – there is no such rigorous target setting as in the case of the semantic web – however, once the database is there, it can be used as a source for automatic processing and further information services.
The Wikidata project is very interesting because, first, it has set very realistic targets, i.e. it will be successful, and second, during implementation it will face exactly the same problems which prevent the fast spread of the semantic web. It is a real-world experiment.
Web Database
The semantic web and Wikidata both start from text. However, the reverse approach is also possible, i.e. building a database of a domain (e.g. company information, a hotel database or a legal database). Several such databases exist and are used everywhere. This approach naturally also has the classic limitations: such databases are usually confined to a geography, a language, a domain of knowledge and a modeling approach. They also often limit access to subscribers.

(See: http://meta.wikimedia.org/wiki/Wikidata and http://techcrunch.com/2012/03/30/wikipedias-next-big-thing-wikidata-a-machine-readable-user-editable-database-funded-by-google-paul-allen-and-others/)
Software tools
People need new kinds of software tools to publish their knowledge in a more structured form than text, pictures or video. Two major directions seem feasible: the first tries to make programming as easy as possible; the second tries to implement tools that are close to the tools professionals – who are not programmers – already use.
Encapsulation
The major advantage of software is that it can encapsulate knowledge for the user. While text (pictures and video) can be very good at transferring knowledge (and software can also be used very well for the same purpose), software makes knowledge usable even if the user doesn't know the details of the solution, while text requires you at least to work through an algorithm. (The difference is like receiving all the parts and full documentation to assemble a car, versus receiving the car ready-made.) The ideal solution would be to mix code and text – which can be done, e.g. with HTML, but not naturally.
Scripting languages
Scripting languages are very much like traditional languages; the major technical difference is that scripting languages are interpreted rather than compiled. However, it's not easy to give an exact definition of scripting languages; in general they try to make small programming tasks easier and faster, and they are often made to support a specific task (e.g. shell programming) or to be a "glue" language connecting different tools, resources and "heavy" languages.
Scripting languages are better suited to user programming than "real" programming languages, but they are still programming languages. They provide a higher level of abstraction and are easier to use, but the difference from programming is not significant.
(Examples: Perl, Python, JavaScript, Tcl.)

Google Trends shows a slowly diminishing interest in scripting languages.


Functional languages
Functional languages differ from "traditional" programming languages in that they describe functions rather than imperative instructions. That makes them very effective, especially at expressing recursive operations. Well-known functional languages are Common Lisp, Scheme, ISLISP, Clojure, Racket, Erlang, OCaml, Haskell, Scala and F#.



(See: http://en.wikipedia.org/wiki/F_Sharp_%28programming_language%29)
Functional programs are very useful in certain areas and for educated programmers and professionals, but they are not very suitable for people who are not programmers.
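
To keep all examples in one language, here is a functional-style sketch written in Python rather than in one of the languages listed above: recursion and higher-order functions take the place of imperative loops.

    # Functional style: recursion and higher-order functions.
    from functools import reduce

    def factorial(n):
        # recursion instead of a loop
        return 1 if n == 0 else n * factorial(n - 1)

    squares = list(map(lambda x: x * x, range(1, 6)))  # [1, 4, 9, 16, 25]
    total = reduce(lambda a, b: a + b, squares)        # 55

    print(factorial(5), squares, total)  # 120 [1, 4, 9, 16, 25] 55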
Domain specific languages
Domain-specific languages are by nature very powerful in the domain they are written for. This may make it very tempting for specialists in the field to invest in learning the language.
Macros
This is a simple thing, but I wish that equations could be exposed to a solver. E.g. if I write the tag <EQ>y=sqr(x); y:"square",x:"input"<\EQ>, it will be exposed as a simple input field called "input" and a result field called "square", with the style defined in a stylesheet.
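
A toy interpreter for this hypothetical <EQ> tag could look like the sketch below; the tag syntax is my own proposal above, not an existing standard, and eval is acceptable only in a sketch like this.

    # Parse the hypothetical <EQ> tag body and evaluate the equation.
    import re

    def run_eq_tag(tag_body, value):
        # e.g. tag_body = 'y=sqr(x); y:"square",x:"input"'
        expr, labels = tag_body.split(";")
        lhs, rhs = (s.strip() for s in expr.split("="))
        rhs = rhs.replace("sqr(x)", "x * x")  # support just one function here
        result = eval(rhs, {"x": value})      # fine for a sketch, unsafe in production
        names = dict(re.findall(r'(\w+):"(\w+)"', labels))
        return {names[lhs]: result}

    print(run_eq_tag('y=sqr(x); y:"square",x:"input"', 4))  # {'square': 16}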
Decision table
Decision tables are the next step from data to algorithm. Their advantage is that they are easy to construct (provided that the appropriate knowledge is available) and easy to use. Decision tables can also be nested or extended with other knowledge-representation tools. However, it is not easy to "grow" a decision table beyond a certain complexity.
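
A decision table can be represented directly as data, as in the following sketch; the tax-band rules are made up purely for illustration.

    # A decision table as data: each row is (conditions, outcome).
    decision_table = [
        # (income_over, resident) -> tax_rate
        ({"income_over": 50000, "resident": True},  0.40),
        ({"income_over": 0,     "resident": True},  0.20),
        ({"income_over": 0,     "resident": False}, 0.30),
    ]

    def tax_rate(income, resident):
        for conditions, rate in decision_table:
            if income > conditions["income_over"] and resident == conditions["resident"]:
                return rate
        raise ValueError("no matching rule")

    print(tax_rate(60000, True))   # 0.4
    print(tax_rate(20000, False))  # 0.3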
Decision trees
Decision trees are similar to decision tables; the major difference is the graphical representation.
Genetic algorithms
I don't feel confident enough to analyze the possibilities of genetic algorithms and similar tools (e.g. neural networks, Petri nets, simulated annealing, simplex…). The important thing is that it is possible to build tools which – provided with appropriate input information – are able to "find" solutions to complex problems.
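
Purely as an illustration of the "tools that find solutions" idea, here is a toy evolutionary search (selection plus mutation, without crossover) maximizing a simple function; every parameter below is an arbitrary illustrative choice.

    # Toy evolutionary search maximizing f(x) = -(x - 7)^2 over 0..31.
    import random

    def fitness(x):
        return -(x - 7) ** 2

    def evolve(generations=30, pop_size=20):
        population = [random.randint(0, 31) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]   # keep the fitter half
            children = [max(0, min(31, p + random.choice([-2, -1, 1, 2])))
                        for p in survivors]           # refill with mutated copies
            population = survivors + children
        return population[0]

    random.seed(1)
    print(evolve())  # converges to (or near) 7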
Agents
Agents are standalone software applications which can solve various problems on behalf of the user (which also means they "know" the user). They collect information that is important for you, or perform tasks on your behalf. To be effective they need a large amount of semantic data or a lot of artificial intelligence. Although the notion of agents (there are several types of software agents) has been around for quite a while, I don't see many examples of personal agents around. (Simple agents like newsreaders do exist.)


Personal computing
Personal computing is changing significantly (just think of the success of tablets). There are two directions of development.

  1. Personal computers are starting to dissolve and take new forms. I am thinking of devices like game controllers, smart phones and television sets. These devices will take over several functions from today's computers, but in essence they will be consumer devices.
  2. Those who produce content will still need tactile input and full computing capabilities (word processing, calculation, programming, video editing…). These computers will also undergo very intensive development; however, they will still resemble the laptops we know today. They will just be lighter and flatter, with much longer battery life, easier use and mobility, and flexible connections to cloud services.
Server computing
Servers will move to the cloud, and company data centers will disappear.
Performance requirements
More flexibility is expected of "content producing" computers. While they will sacrifice performance for long battery life when they have no connection to power or to the Internet, performance will be boosted by cloud services, and local resources will be used intensively, when the computer is plugged in. Computers will learn the user's usage profile and adapt to it.
Cloud
The challenge for the cloud is to integrate seamlessly with personal computers, so that the user can use cloud services and offload resource-hungry operations to the cloud while still being able to work effectively offline (although that will happen less and less frequently in the future).
Assessment
Which is the best method? As with everything in life, there is no single optimal knowledge-sharing method. However, we can set up a set of criteria which can help us select the optimal way and improve the tools.
Criteria

  • Credibility
Naturally credibility depends on the author, but the user must be in a position to decide how credible a source is.

  • Creatability
How easy it is to publish (i.e. create knowledge).

  • Accessibility
How easy it is to access knowledge.

  • Motivation
There must be a system which motivates authors to share their knowledge.

  • Comprehensibility
Measurement
Conclusion
Process
If we analyze the online knowledge production process as we would a traditional production process, we find an imbalance. Almost all new initiatives, like the semantic web or new search technologies, come from the distributors' side, and most investment comes from there. However, the new initiatives put the burden on the knowledge producers (let's call them publishers). They have to define microdata, semantic information, links, taxonomies, etc. We would also like them to fill in databases, write code, and construct tables, macros…
I draw two conclusions from these facts: the first is that publishers should get better and more powerful tools; the second is that publishers should have a stronger influence on the technology and on content ownership.
Problem structure
There are the following problems with online knowledge sharing.

Cooperation
There is a growing number of websites, and that is a good thing. The failure of socialism has shown that it is not possible to have a single, super-rationalistic structure, even for a single domain of knowledge, industry or market.
However, from a consumer perspective, easy navigation is needed between websites, applications, documents… This requires that these resources of knowledge speak "the same language" so they can be interlinked. This is the promise of the semantic web: it connects information on the meta level. However, it is not possible to have a single set of metadata; there will always be an area not covered or a perspective not regarded. The only thing we can expect is that these metadata will have some common touch points, very much like HTML documents can link to each other.


Proposal


In my view, the ultimate publication process will look like this (a sketch of the entity-tagging step follows the list).


  • It starts with the consumer: authors, editors and the like will permanently check their consumers' information needs. Naturally this is done today as well, but I expect stronger tools than before.
  • The basis for publication is text, as today.
  • A refined authoring tool defines the domain of the document (if the author hasn't), and looks for similar documents, references, connections… This feature is built into the text editor; the software runs permanently in the background.
  • The software chooses an appropriate taxonomy and identifies entities in the text. The user can modify these selections (and also extend the local taxonomy).
  • The tool identifies places and data and maps them into a database. Again, the user can modify the data or enter new data. The software can only work with a predefined database schema.
  • The publisher can also use and define software tools in her text (macros, decision tables…). These are also stored in a database.
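
A minimal sketch of the entity-tagging step mentioned above: match the text against a small predefined taxonomy and record the hits in an (here in-memory) "database". The taxonomy and schema are assumptions made for illustration.

    # Background entity tagging against a tiny, predefined taxonomy.
    taxonomy = {
        "tax": ["vat", "income tax", "tax calculation"],
        "geography": ["chicago", "budapest"],
    }

    def tag_entities(text):
        found = []
        lowered = text.lower()
        for topic, terms in taxonomy.items():
            for term in terms:
                if term in lowered:
                    found.append({"topic": topic, "entity": term})
        return found

    database = []  # stands in for the predefined database schema
    database.extend(tag_entities("The new VAT rules change tax calculation in Budapest."))
    print(database)
    # [{'topic': 'tax', 'entity': 'vat'}, {'topic': 'tax', 'entity': 'tax calculation'},
    #  {'topic': 'geography', 'entity': 'budapest'}]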

The question is who pays for the content. Again, my concern is that revenue is moving away from authors, which may lead to a decrease in content quality and in the amount of good content. Now publishers can either choose an app-store model, rely on advertising, or use subscriptions. In my view, there should be a stronger specialization, where publishers produce the content and distributors pay for it (much like an average grocery store cooperates with its partners). This is a well-proven model of the non-digital world.
