Tacit knowledge is non-codifiable, and its acquisition and transfer require hands-on practice.

Knowledge Management Strategy

Mohammad Nazim, Bhaskar Mukherjee, in Knowledge Management in Libraries, 2016

Knowledge Capture and Codification

According to Gandhi (2004), knowledge capture “involves the key inputs and outputs of knowledge. Key inputs may include specific data and information, verbal or written communications, and other shared explicit and tacit knowledge such as best practices. Key outputs may be [in the form of] internal documents, reports, research papers, procedures, internal benchmarks, and best practices” (p. 373). Knowledge capture is important for the success and development of a knowledge-based organization. Much of the knowledge in an organization resides in the heads of its people, and if it is not captured and stored, it is likely to be lost when an employee leaves the organization. Therefore it is essential to identify the expertise and skills of staff and capture them to avoid a collective loss of organizational memory.

Libraries need to develop systems to identify people’s expertise so that it may be captured, shared, and reused in the future. Formal processes of capturing knowledge include collating internal profiles of librarians and also standardizing routine information-update reports. Additionally, libraries can capture the most commonly received enquiries at the reference desk and place them within easy reach to better serve users in the shortest time possible. It is important to create databases of frequently asked questions to enable librarians to not only provide an in-depth customized reference service but also to become knowledgeable about handling different enquiries (Maponya, 2004).
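To make this concrete, the sketch below shows one minimal way such an FAQ database could work, assuming a simple keyword match over stored enquiries; the class names, fields, and matching approach are hypothetical illustrations, not a prescription from Maponya (2004).

```python
from dataclasses import dataclass, field

@dataclass
class FAQEntry:
    """One frequently asked reference-desk enquiry and its vetted answer."""
    question: str
    answer: str
    keywords: set[str] = field(default_factory=set)

class FAQDatabase:
    """Hypothetical in-memory store of common reference-desk enquiries."""

    def __init__(self) -> None:
        self.entries: list[FAQEntry] = []

    def add(self, question: str, answer: str, keywords: set[str]) -> None:
        self.entries.append(FAQEntry(question, answer, keywords))

    def search(self, query: str) -> list[FAQEntry]:
        """Rank entries by how many query words match their keywords."""
        terms = set(query.lower().split())
        scored = [(len(terms & e.keywords), e) for e in self.entries]
        return [e for score, e in sorted(scored, key=lambda p: -p[0]) if score > 0]

faq = FAQDatabase()
faq.add("How do I renew a book?",
        "Renew online via your library account or at the circulation desk.",
        {"renew", "book", "loan"})
for entry in faq.search("renew my loan"):
    print(entry.question, "->", entry.answer)
```

In practice a library would use its intranet or reference-management software rather than code like this; the point is simply that each enquiry is captured once, with access points that let any librarian retrieve it quickly.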

Since tacit knowledge is intuitive and practice-based, it cannot easily be passed on to others. To make the best use of tacit knowledge, it must be codified into an explicit form. Once tacit knowledge is codified and converted to explicit knowledge, it may easily be stored, organized, combined, accessed, shared, and manipulated in different contexts. The codification of knowledge provides several benefits to libraries:

Codification enables libraries to secure knowledge. A library is in less danger of losing its intellectual assets, even when its employees retire or leave the organization.

Codification enables fast access and retrieval of knowledge.

Codification facilitates sharing, reuse, reflection, and ongoing learning.

The processes of codification and representation of knowledge for access and reuse are not new to Library & Information Science (LIS) professionals, as they are involved in many stages of the knowledge processing cycle. Gandhi (2004) outlined the following steps in the codification and representation of knowledge:

1. Identifying, acquiring, or extracting valuable knowledge from documents, discussions, or interviews, usually accomplished with the help of subject matter experts.

2. Refining, writing up, and editing “raw knowledge” (such as project files, presentations, e-mail messages), and turning it into “processed knowledge” (such as lessons learned, best practices, case studies).

3. Organizing the processed knowledge and making it accessible by adding index terms, subject headings, cross-reference links, and metadata (a minimal sketch of this step follows the list).

4. Packaging, publishing, and disseminating knowledge through a variety of channels, including intranet web pages, CD-ROMs, subject-oriented pathfinders, and “knowledge portals” that are focused on particular business needs or issues.

5. Designing and managing the overall information architecture, consisting of a set of well-defined standards and schemes for organizing, classifying, publishing, and navigating the organization's intellectual content.
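As an illustration of step 3, here is a minimal Python sketch that builds a simple inverted index from the subject headings and index terms added to processed knowledge items; the records, field names, and identifiers are invented for illustration and are not from Gandhi (2004).

```python
from collections import defaultdict

# Processed knowledge items (step 2 output), enriched with metadata (step 3).
# All document IDs, titles, and terms here are invented examples.
documents = {
    "doc-001": {
        "title": "Lessons learned: serials budget negotiation",
        "subject_headings": ["acquisitions", "budgeting"],
        "index_terms": ["serials", "vendor negotiation", "cost per use"],
    },
    "doc-002": {
        "title": "Best practice: weeding the reference collection",
        "subject_headings": ["collection management"],
        "index_terms": ["weeding", "reference collection"],
    },
}

# Build an inverted index from every heading or term to the documents it describes.
index: dict[str, set[str]] = defaultdict(set)
for doc_id, meta in documents.items():
    for term in meta["subject_headings"] + meta["index_terms"]:
        index[term.lower()].add(doc_id)

def lookup(term: str) -> set[str]:
    """Return the IDs of documents indexed under the given term."""
    return index.get(term.lower(), set())

print(lookup("weeding"))  # {'doc-002'}
```

Real systems would layer controlled vocabularies and the cross-reference links mentioned above on top of this basic lookup.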


URL: https://www.sciencedirect.com/science/article/pii/B9780081005644000053

What are virtual walls to flow of knowledge in teamwork discussions?

Vichita Vathanophas, Suphong Chirawattanakij, in Technology and Knowledge Flow, 2011

From tacit to explicit

Personalised tacit knowledge can be disseminated in a written document via ‘Externalisation’; externalisation therefore involves writing. An example of this form is a business procedure written up from the experience of a practitioner. Externalisation is not an easy process: its challenge lies in how a knowledge owner can effectively convey the knowledge they possess in a written document. Some individuals disregard their knowledge and leave knowledge articulation to others, while others recognise the skill, craft and experience they own but cannot render them in a visible form.

In technological terms, externalisation can play its role via online social network tools. Experts can share their knowledge in general or specific network groups, which can range from an organisational shared space to a worldwide network. Moreover, an organisation can retain knowledge that would otherwise be lost through employees’ resignations by recording their knowledge in the organisation’s knowledge base, from which existing staff can learn later.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346463500046

Tacit Knowledge and Engineering Design

Paul Nightingale, in Philosophy of Technology and Engineering Sciences, 2009

Publisher Summary

Understanding what tacit knowledge is, and particularly how the concept is used, is important for philosophers of technology because it is now a central concept in policy discussions related to engineering. It is used to explain why knowledge production is localized, cumulative and path-dependent, and therefore why designers, design teams, firms and regions differ in their technological performance. Given the impact of public policy related to the “knowledge economy,” there is a legitimate role for philosophers of technology to investigate the foundations of these ideas in more detail. This is particularly important because the terminology of tacit knowledge is applied very widely, but is rarely explicitly explained. Just what tacit knowledge is, and how it is valuable during the development of technology, is often itself a “tacit” concept. This chapter defines engineering as the art of organizing and negotiating the design, production, operation and decommissioning of artefacts, devices, systems and processes that fulfill useful functions by transforming the world to solve recognized problems. This hopefully highlights the practical, creative nature of engineering, with a clear connection to judgments and choices about solutions that achieve a balance between potentially conflicting outcomes in terms of their aesthetic, economic, environmental, technical and other criteria.


URL: https://www.sciencedirect.com/science/article/pii/B9780444516671500173

Tacit Knowledge, Psychology of

A.S. Reber, in International Encyclopedia of the Social & Behavioral Sciences, 2001

The term tacit knowledge was first brought into the social scientist's lexicon by the philosopher Michael Polanyi. Polanyi, who wrote extensively on the role of consciousness in creativity and in the process of doing science, used the term in a manner that extended its standard, nontechnical connotations. Rather than simply having it refer to knowledge that is implied, Polanyi used it to refer to knowledge that was personal, private, and, importantly, knowledge that was not necessarily available for conscious introspection. This last meaning is the one that has had a dramatic impact in contemporary psychology—although in many writings tacit has been replaced by the synonymous term implicit. However, no matter which term is used, the notion has become of considerable importance since it is now understood that a good deal of knowledge, as Polanyi suggested, is acquired and held largely independent of awareness. Moreover, the underlying neurological mechanisms that are responsible for acquisition and retention of implicit or tacit knowledge appear to be remarkably robust and to function virtually normally in individuals with several psychological and neurological disorders that compromise conscious, top-down processes.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767014923

Principles of knowledge management

Tom Young, Nick Milton, in Knowledge Management for Sales and Marketing, 2011

Knowledge suppliers and users

Prusak’s definition presented in the previous section implies the existence of suppliers of knowledge (‘individuals’) and users of knowledge (‘others’), people in whose minds the knowledge is buried and people and teams who need access to that knowledge.

Knowledge is created through experience and through the reflection on experience in order to derive guidelines, rules, theories, heuristics and doctrines. Knowledge may be created by individuals, through reflecting on their own experience, or it may be created by teams reflecting on team experience. It may also be created by experts or communities of practice reflecting on the experience of many individuals and teams across an organisation. The individuals, teams and communities who do this reflecting can be considered as ‘knowledge suppliers’.

In business activity, knowledge is applied by individuals and teams. They can apply their own personal knowledge and experience, or they can look elsewhere for knowledge – to learn before they start, by seeking the knowledge of others. The more knowledgeable they are at the start of the activity or project, the more likely they are to avoid mistakes, repeat good practice, and avoid risk. These people are ‘knowledge users’.

We have introduced the idea of tacit knowledge and explicit knowledge. The knowledge can be transferred from the supplier to the user tacitly, through dialogue, or explicitly, through codifying the knowledge. Figure 1.2 shows these two approaches by looking at the two places where knowledge can be stored: in people’s heads or in codified form in some sort of ‘knowledge bank’ (Figure 1.2 is a redrafting of the SECI model of Nonaka and Takeuchi). These two stores can be connected in four ways:


Figure 1.2. Knowledge flow from supplier to user

direct transfer of knowledge from person to person (communication);

transfer of knowledge from people to the ‘knowledge bank’ (knowledge capture);

organisation of knowledge within the knowledge bank (organisation);

transfer of knowledge from the ‘knowledge bank’ back to people (access and retrieval).
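To make the model concrete, the following minimal Python sketch maps the two stores and the four connections listed above onto operations; all class and method names are our own invention for illustration, not the authors'.

```python
class KnowledgeBank:
    """The codified store: a 'knowledge bank' of captured knowledge."""

    def __init__(self) -> None:
        self._items: dict[str, str] = {}

    def capture(self, topic: str, content: str) -> None:
        """People -> bank: knowledge capture."""
        self._items[topic] = content

    def organise(self) -> None:
        """Within the bank: organisation of knowledge (here, simple ordering)."""
        self._items = dict(sorted(self._items.items()))

    def retrieve(self, topic: str) -> str | None:
        """Bank -> people: access and retrieval."""
        return self._items.get(topic)


class Person:
    """The tacit store: knowledge held in an individual's head."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.know_how: dict[str, str] = {}

    def communicate(self, other: "Person", topic: str) -> None:
        """Person -> person: direct transfer through dialogue (communication)."""
        if topic in self.know_how:
            other.know_how[topic] = self.know_how[topic]


supplier, user, bank = Person("supplier"), Person("user"), KnowledgeBank()
supplier.know_how["pricing"] = "Anchor high, concede slowly."
supplier.communicate(user, "pricing")                  # direct person-to-person transfer
bank.capture("pricing", supplier.know_how["pricing"])  # capture into the knowledge bank
bank.organise()
print(bank.retrieve("pricing"))                        # retrieval back to a person
```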

Knowledge can therefore flow from supplier to user (from person to person, or team to team) in two ways.

The most direct route (the upper left arrow on Figure 1.2) is through direct communication and dialogue. Face-to-face dialogue, or dialogue via an online communication system, is an extremely effective means of knowledge transfer. This method allows vast amounts of detailed knowledge to be transferred, and the context for that knowledge to be explored. It allows direct coaching, observation and demonstration. However, it is very localised: the transfer takes place in one place at a time, involving only the people in the conversation. For all its effectiveness as a transfer method, it is not efficient. For direct communication and dialogue to be the only knowledge transfer mechanism within an organisation would require a high level of travel and discussion, and may only be practical in a small team working from a single office where travelling is not an issue (for example, a regional sales team that meets on a regular basis). This may be the only practical approach to the transfer of uncodifiable knowledge – knowledge that cannot be written down (which Polanyi would call ‘tacit’). However, it should not be the only mechanism of knowledge transfer, nor should knowledge be stored only as tacit knowledge in people’s heads. Using people’s memories as the primary place for storing knowledge is a very risky strategy. Memories are unreliable: people forget, misremember or post-rationalise. People leave the company, retire or join the competition. For example, what is the staff turnover in your team? Your division? Your company? How much knowledge is leaving your organisation in the heads of the departing people? There needs to be a more secure storage mechanism for crucial knowledge, and a more efficient means of transfer than dialogue alone.

The less direct flow of knowledge (the larger, lower right arrow on Figure 1.2) is through codification and capture of the knowledge, storage in some sort of ‘knowledge bank’ and retrieval of the knowledge when needed. The transfer is lower bandwidth than direct communication, as it is difficult to write down more than a fragment of what you know. No dialogue is possible and demonstrations are restricted to recorded demonstrations, e.g. using video files. Transfer of knowledge by this means is not very effective. However, the knowledge need only be captured once to be accessed and reused hundreds of times, so it is an efficient method of transferring knowledge widely. The knowledge is secure against memory loss or loss of personnel. This approach is ideal for codifiable knowledge with a wide user base. For example, the widespread transfer of basic cooking knowledge is best done through publishing cookery books. It is also ideal for knowledge that is used intermittently, such as knowledge of office moves or knowledge of major acquisitions. These events may not happen again for a few years, by which time the individuals involved will have forgotten the details of what happened, if it is not captured and stored.

These two approaches to knowledge transfer are sometimes called the connect approach (the smaller arrow), where knowledge is transferred by connecting people, and the collect approach (the larger arrow), where knowledge is transferred by collecting, storing, organising and retrieving it. Each method has advantages and disadvantages, as summarised in Table 1.1. Effective knowledge management strategies need to address both of these methods of knowledge transfer. Each has its place; each complements the other.

Table 1.1. The connect and collect approaches to knowledge transfer

Connect approach

Advantages: very effective; allows transfer of non-codifiable knowledge; allows socialisation; allows the knowledge user to gauge how much they trust the supplier.

Disadvantages: risky (human memory is an unreliable knowledge store); inefficient (people can only be in one place at a time); people often don’t realise what they know until it’s captured.

Types of knowledge suitable for this form of transfer: ephemeral, rapidly changing knowledge, which would be out of date as soon as it’s written; knowledge of continual operations, where there is a large, constant community; knowledge needed by only a few.

Comments: One traditional approach to knowledge management is to leave knowledge in the heads of experts. This is a risky and inefficient strategy.

Collect approach

Advantages: easy and cheap; allows systematic capture; creates a secure store for knowledge; very efficient (knowledge can be captured once and accessed many times).

Disadvantages: some knowledge cannot be effectively captured and codified; capturing requires skill and resource; captured knowledge can become impersonal.

Types of knowledge suitable for this form of transfer: stable, mature knowledge; knowledge of intermittent or rare events; high-value knowledge; knowledge with a large user base.

Comments: A strategy based only on capture will miss out on the socialisation that is needed for culture change, and may fail to address some of the less codifiable knowledge.
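Read as a rough decision rule, the criteria in Table 1.1 can be encoded as a small helper. The sketch below is purely illustrative; the function name and the numeric threshold for a "large user base" are our own assumptions, not the authors'.

```python
def suggest_transfer_approach(codifiable: bool, stable: bool, user_base: int) -> str:
    """Rough encoding of Table 1.1: collect suits stable, codifiable knowledge
    with a wide user base; connect suits the rest. Illustrative only."""
    if not codifiable:
        return "connect"  # some knowledge cannot be effectively captured
    if stable and user_base > 20:  # assumed threshold for a 'large' user base
        return "collect"
    return "connect"  # ephemeral knowledge, or knowledge needed by only a few

print(suggest_transfer_approach(codifiable=True, stable=True, user_base=500))   # collect
print(suggest_transfer_approach(codifiable=False, stable=True, user_base=500))  # connect
```

In reality, as the text notes, the two approaches complement each other, so a strategy would rarely reduce to a single rule like this.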


URL: https://www.sciencedirect.com/science/article/pii/B9781843346043500012

Knowledge (Explicit and Implicit): Philosophical Aspects

M. Davies, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5.1 Quine's Challenge

Quine challenges Chomsky's introduction of the notion of tacit knowledge by making use of the distinction between behavior that conforms to a rule and behavior that is guided by a rule. A subject can behave in a way that conforms to a rule without using the rule to guide his behavior for, as Quine (1972) uses the notion of guidance: ‘(T)he behavior is not guided by the rule unless the behaver knows the rule and can state it.’ Guidance requires explicit knowledge.

Chomsky's tacit knowledge is supposed to require less than explicit knowledge; but it cannot be equated with mere conformity. In fact, conformity to rules is neither necessary nor sufficient for tacit knowledge of those rules. It is not necessary, since the presence of tacit knowledge of rules does not guarantee perfect deployment of that knowledge in actual performance. It is not sufficient, since a tacit knowledge claim is not offered as a summary description of behavior but as a putative explanation of behavior. There will always be alternative sets of rules that require just the same behavior for conformity; but it is part of the idea of tacit knowledge that a speaker's actual behavior might be correctly explained in terms of tacit knowledge of one set of rules rather than the alternatives.

It is at this point that Quine (1972) poses his challenge. He insists that, if an attribution of tacit knowledge is an empirical claim that goes beyond a summary of conforming behavior, then it should be possible to indicate what kinds of evidence would count in favor of or against that empirical claim. He also insists that this evidence should involve the subject's behavior. To this latter point, it is reasonable to reply that there can be no a priori limit on the kinds of evidence that might be relevant to an empirical claim. So it is not legitimate to restrict evidence to the behavior of the very subject to whom the attribution of tacit knowledge is being made. But the more general point about evidence is a fair one.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767009955

Scientific Research and Communication

Ronald Rousseau, ... Raf Guns, in Becoming Metric-Wise, 2018

2.1 Knowledge and Scientific Research

2.1.1 Tacit Versus Explicit Knowledge

According to Polanyi (1966) tacit knowledge is nonverbalized, intuitive, and unarticulated knowledge. It is knowledge that resides in a human brain and that cannot easily be codified or captured. Nevertheless it is one of the aims of the field of artificial intelligence, and in particular of expert systems, to include exactly this kind of knowledge. Explicit knowledge is that kind of knowledge that can be articulated in a formal language and transmitted among individuals. It is the kind of knowledge found in all types of scientific publications.

2.1.2 Scientific Research

This subsection is largely based on information from Wikipedia: http://en.wikipedia.org/wiki/Science. Persons who spend their professional time doing science are called scientists or researchers. Note that here, and further on in this work, the word “science” refers not only to the natural and biomedical sciences, but also to applied science (engineering), the social sciences, and the humanities. Outsiders may ask: Why do research? Why publish research results? Is it for the benefit of humanity, out of curiosity, to increase one’s social standing, to have an attractive and respected occupation, or in pursuit of recognition? We do not try to answer these questions, as the answers are highly personal. Some may even do research in the secret hope of becoming famous like Einstein. However, getting rich is rarely a motivation for doing academic research.

Whatever one’s field of inquiry, one always has to deal with “problem choice”: the issue of choosing “good” research problems from among a large number of possibly interesting ones. Which criteria should one use? Probably there is no general answer, and being able to choose an interesting and soluble problem is just one of the characteristics that differentiate great scientists from good scientists.

Science is commonly viewed as an activity that leads to the accumulation of knowledge. Its main aim is to improve the knowledge of humanity by using scientific methods. The scientific method seeks to explain the events of nature in a logical and in most cases reproducible way (lab experiments must be reproducible but, e.g., the Big Bang is not). The use of such methods distinguishes a scientific approach from, for instance, a religious one, as supernatural explanations are never accepted in science.

Science can be described as a systematic endeavor to build and organize knowledge. Yet, performing scientific investigations differs in an essential way from following a recipe. It requires intelligence, imagination, and creativity. Research implies an inquiry process, including a problem statement, consideration of the significance of the problem, statement of the study objectives, research design, a clear and precise methodology, information about the reliability and validity of the results, appropriate data analysis, as well as a clear and logical presentation (Hernon & Schwartz, 2002).

Scientific investigations can be subdivided into different types. One distinction is between formal and empirical sciences. Formal sciences are not based on observations, but on logic and a set of axioms from which other statements (theorems) are deduced. The most important formal sciences are logic and mathematics, but theoretical computer science and formal linguistics are formal sciences as well. Most sciences are empirical sciences, including natural sciences, social sciences, and the humanities. While natural sciences study the material world and natural phenomena, the social sciences and the humanities investigate human behavior and societies. Being a scientist in the natural sciences usually leads to formulating testable explanations and predictions about the universe, followed by performing the actual experiments or trying to observe the expected phenomena (see further on when we discuss the work of Popper). Yet, there are exceptions such as large parts of cosmology or elementary particle physics (e.g., string theory) (Woit, 2006) for which there do not (yet) exist experiments. One may say that such theories belong to a region that is part of the formal sciences, but are geared towards becoming empirical theories.

Disciplines that use science, like engineering and medicine, are referred to as applied sciences. Different engineering fields apply physics and chemistry (and possibly other fields), while medicine applies biology. Some applied fields use basic knowledge from different fields, including the formal sciences, such as genetic epidemiology which uses both biological and statistical methods, or synthetic biology which applies, among others, biotechnology and computer engineering.

Another way of describing science is through Stokes’ classification which involves Pasteur’s quadrant (Stokes, 1997). Pasteur’s quadrant is a label given to a class of scientific research methods that seek fundamental understanding of scientific problems, and, at the same time, seek to be eventually beneficial to society. Louis Pasteur’s research is thought to exemplify this type of method, which bridges the gap between “basic” and “applied” research. The term Pasteur’s quadrant was introduced by Donald Stokes in his book with the same title (Stokes, 1997). As shown in Table 2.1, scientific research can be classified according to whether it advances human knowledge by seeking a fundamental understanding of nature (basic research), or whether it is primarily motivated by the need to solve immediate problems (applied research).

Table 2.1. Pasteur’s quadrant

                                          Considerations of use?
                                          No                           Yes
Quest for fundamental    Yes              Pure basic research (Bohr)   Use-inspired basic research (Pasteur)
understanding?           No                                            Pure applied research (Edison)

The result is three distinct classes of research: pure basic research (exemplified by the work of the atomic physicist Niels Bohr), pure applied research (exemplified by the work of the inventor Thomas Edison), and use-inspired basic research (exemplified by the work of Louis Pasteur). Actions that involve neither a search for fundamental understanding nor any considerations of use can hardly be called “research”—hence the empty fourth cell.

Project leaders with a mindset belonging to the Pasteur quadrant are said to be the natural leaders of successful interdisciplinary work (Van Rijnsoever & Hessels, 2011).

As we will occasionally refer to the nature of the scientific method, we include a short description of the ideas of Karl Popper and Thomas Kuhn. According to Popper (1959), a scientific theory in the natural sciences must be empirical, which means that it is falsifiable. More concretely, a scientific theory leads to predictions. Falsification occurs when such a prediction (i.e., a logical consequence of the theory) is disproved either through observation of natural phenomena, or through experimentation, i.e., trying to simulate natural events under controlled conditions, as appropriate to the discipline. In the observational sciences, such as astronomy or geology, a predicted observation might take the place of a controlled experiment. Popper stressed that if one singular conclusion of a theory is falsified, the whole theory is falsified and must be discarded, or at least modified. If a hypothesis survives repeated testing, it may become adopted into the framework of a scientific theory. Yet, he writes:

A positive decision can only temporarily support the theory, for subsequent negative decisions may always overthrow it. So long as a theory withstands detailed and severe tests and it is not superseded by another theory in the course of scientific progress, we may say that it has “proved its mettle” or that it is “corroborated” by past experience.

Popper, 1959.

In addition to testing hypotheses, scientists may also generate a model based on observed phenomena. This is an attempt to describe or depict a phenomenon in terms of a logical, physical or mathematical representation and to generate new hypotheses that can be tested. While performing experiments to test hypotheses, scientists may have a preference for one outcome over another (called a confirmation bias), and so it is important to ensure that science as a whole can eliminate this bias. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments, i.e., to replicate the original experiments. Taken in its entirety, the scientific method allows for highly creative problem solving (Gattei, 2009).

Another important aspect of Popper’s philosophy is his theory of the three worlds or universes:

First, the world of physical objects or of physical states, secondly, the world of states of consciousness, or of mental states, or perhaps of behavioural dispositions to act, and thirdly, the world of objective contents of thought, especially of scientific or poetic thoughts and works of art.

Popper, 1972.

Clearly the information sciences reflect on objects belonging to World 3. More information on the life and ideas of Popper can be found in Stokes (1998).

When it comes to the nature of the scientific method, we also want to mention Thomas Kuhn’s work (Kuhn, 1962) and his use of the term paradigm. A paradigm can be described as “a typical example or pattern of something” (http://www.merriam-webster.com/). Yet, when scientists use the word paradigm, they mostly have in mind the set of practices that define a scientific discipline at a particular period of time, as proposed by Kuhn. More precisely, in The Structure of Scientific Revolutions (Kuhn, 1962), he defines scientific paradigms as: “universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of practitioners.” Kuhn saw the sciences as going through alternating periods of normal science, when an existing model of reality dominates, and revolution, when the model of reality itself undergoes a sudden drastic change. Paradigms have two aspects. Firstly, within normal science, the term refers to the set of exemplary experiments that are likely to be copied or emulated (https://en.wikipedia.org/wiki/Paradigm). The choice of exemplars is a specific way of viewing reality: this view and the status of “exemplar” are mutually reinforcing. Secondly, underpinning this set of exemplars are shared preconceptions, made prior to (and conditioning) the collection of scientific evidence. In contrast to Popper’s view, results in conflict with the prevailing paradigm (anomalies) are, for Kuhn, considered to be due to errors on the part of the researcher. It is only when conflicting evidence accumulates that a crisis point is reached, at which a new consensus view is arrived at, generating a paradigm shift.

Popper’s ideas can be said to be prescriptive, while Kuhn’s are more descriptive. Both originated from reflections on the natural sciences. For this reason we mention another model, originating from the social sciences, proposed by Van der Veer Martens and Goodrum (2006). This model has three types of factors: empirical factors, socio-cognitive factors and theoretical factors. The first and the last type have two aspects each, so that there are five factors in total. These are:

applicability–constructivity–accessibility–connectivity–generativity

Concretely, these factors are related to the following questions:

Applicability (the first empirical factor)

Does this theory apply to a wide variety of phenomena?

How salient are the phenomena? Or stated otherwise: How important are these phenomena?

Constructivity (the second empirical factor)

Is this theory constructed so as to facilitate its testing or replication?

Accessibility (the only socio-cognitive factor)

How easy is this theory to understand and utilize?

How important is it to the discipline as a whole?

What types of publication channel have carried it?

How else has this theory been communicated?

Connectivity (the first theoretical factor)

How does this theory fit into existing theoretical frameworks?

How closely is it tied to previous theories?

Generativity (the second theoretical factor)

Can this theory generate a new theoretical framework or new uses of earlier theories?

Although it was presented as a model for theories in the social sciences, we think that its applicability goes beyond the social sciences, and that it can be applied to many other fields of investigation.

We note that the term science is also used to denote reliable and teachable knowledge about a topic, as in library and information science, computer science or public health science.

Because of increasing pressures and increasing needs for funds, science is, unfortunately, becoming more a race of all against all than a joint human endeavor for the benefit of humanity. Yet, or maybe because of this, scientists form collaborating teams (often international groups), leading to an increase in multiauthored publications. Notwithstanding this caveat, the basic purpose of scientific research is still to benefit the community at large by trying to know the unknown, explore the unexplored and create awareness about new research findings.

2.1.3 Citizen Science

A relative newcomer to the realm of science is the citizen scientist, and with it the terms citizen science and crowd science. These terms refer to amateurs or networks of volunteers who participate in a scientific project, usually by collecting or analyzing data. It is a form of public participation in the scientific enterprise. It is said that such participation contributes positively to science-society-policy interactions and is a form of democratization of science. Newer technologies, often computer related, have increased the options for citizen science.

2.1.4 Open Science

Following in the footsteps of movements like open source, open science is a movement that aims to make research and research output more accessible. By making data, software and publications openly accessible, researchers can increase the transparency and replicability of their research, both for colleagues and for a wider audience. As such, open science and citizen science are related in bringing science to the general public.


URL: https://www.sciencedirect.com/science/article/pii/B9780081024744000029

Knowledge flows and graphic knowledge representations

Giorgio Olimpo, in Technology and Knowledge Flow, 2011

Means of giving structure to tacit knowledge

In the context of an organisation, tacit knowledge should become explicit and objectivised. It is worth mentioning that representation tools should not be seen only as instruments for giving shape to the final knowledge representation, i.e. the output of the externalisation process. They also have a constructive role within the process of externalisation, where they may assume a genuinely maieutic value: when the nature of a representation language (determined by its internal constraints) is well tuned to the knowledge to be represented, then identifying and connecting concepts, making abstractions and reasoning are all facilitated and enhanced.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346463500058

Communities in sales and marketing

Tom Young, Nick Milton, in Knowledge Management for Sales and Marketing, 2011

A way to ask questions and give answers

The primary way for community members to access the tacit knowledge of the organisation is through asking questions. Any community member facing a problem where they lack complete knowledge should have a means of asking the community for help and of receiving answers. In a co-located community, this can be done in regular face-to-face meetings. Communities of practice in dispersed or multinational businesses cannot meet regularly face to face. They need some virtual means of raising questions and receiving answers.

There are many web-based or e-mail-based discussion tools and question-and-answer forum tools that provide just this facility, and these are proven and popular tools for sharing knowledge within communities. Buckman Laboratories, for example, operate several global communities of practice, each with its own online Q&A forum, so that customer-facing sales agents anywhere in the world can access lessons from their peers by sending an e-mail asking for lessons and advice. The e-mail is forwarded to all members of the community, and anybody who can share their learning replies. Replies are collated as a ‘threaded discussion’ on a community forum, and community members can read the discussion, add their own comments, and learn from it themselves. Social networking software such as Facebook, Twitter, Yammer and Salesforce Chatter also allows community members to ask each other for help and advice.
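The mechanism described here (a question broadcast to every member, with replies collated into a thread) can be sketched minimally as follows; the class, method, and member names are invented for illustration and do not describe Buckman's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Reply:
    author: str
    text: str

@dataclass
class Question:
    author: str
    text: str
    replies: list[Reply] = field(default_factory=list)

class CommunityForum:
    """Broadcast a question to all members; collate replies as a thread."""

    def __init__(self, members: list[str]) -> None:
        self.members = members
        self.threads: list[Question] = []

    def ask(self, author: str, text: str) -> Question:
        question = Question(author, text)
        self.threads.append(question)
        # In a real system, this step would e-mail every community member.
        print(f"Forwarding question from {author} to {len(self.members)} members")
        return question

    def answer(self, question: Question, author: str, text: str) -> None:
        question.replies.append(Reply(author, text))

forum = CommunityForum(["amy", "ben", "chloe"])
q = forum.ask("amy", "Has anyone negotiated the new tariff terms?")
forum.answer(q, "ben", "Yes - ask for the volume discount up front.")
for reply in q.replies:
    print(f"{reply.author}: {reply.text}")
```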

This sort of online interaction has historically worked better for marketing than for sales, as marketers are more likely to be sitting behind a desk and in front of a screen. However, the increase in power and connectivity of handheld tools and smartphones now means that sales people can join in the online conversation from anywhere they happen to be. This is discussed further in Chapter 6.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346043500048

Reasoning with graphs

Jamie O’Brien, in Shaping Knowledge, 2014

Drawing relationships

Humans, as creative animals, have a predilection for transforming tacit knowledge into explicit knowledge through what has been termed the ‘graphic act’ (Gibson, 1979: 275–6). Researchers in the practices of drawing have commented on how artists create a ‘thinking-space’ through their activities and imaginations, in which remote phenomena are brought to a state of presence (Garner, 2007: 110). Graphs represent the spatial arrangement of things, including their temporal dimension. Both in creativity and in logic, we draw these relationships using lines. Drawing makes use of traces of a hand’s movement over a page, and threads interweave lines to form a surface (Ingold, 2007: 60). Line-making also serves to transmit tacit knowledge and to establish a commonly held explicit knowledge, or a ‘professional vision’, in just about every area of practice.

Relating these kinds of activation through line-making to social space, the sociologist John Urry (2004) has outlined current descriptions of social arrangements as networks of people, places, materials and mobilities, and has observed how current web technologies represent the rapidity of spatial change in social landscapes. These social networks are valued by their ‘connectedness’ or ‘degrees of separation’ that reflect not physical proximity between one person and another, but a kind of symbolic proximity of, for example, ‘friendliness’ or ‘strangeness’. Hence a graph of a social network represents an authored and configured drawing of the people and artefacts that constitute and demarcate its specific social domain.

The edges of a graph are lines that represent connections among nodes such as, for example, the social, spatial or semantic relationships of a community. Graphs allow modellers to reason about these kinds of relationships in a way that is intuitively readable and computationally tractable (Chein et al., 2013). Graphs also allow any kind of agent or entity, and any of their possible relationships, to be handled conceptually. This means that even those properties of a relationship that are ‘passive’ in the real world (such as the fact of someone’s existence) can be rendered as an active component of the graph representation (Sowa, 2009).
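As an illustration of how such connectedness, or 'degrees of separation', can be computed from a graph of nodes and edges, here is a minimal Python sketch using breadth-first search over an adjacency list; the toy network and names are invented.

```python
from collections import deque

# A social network as an adjacency list: nodes are people, edges are ties.
# This toy data is invented purely for illustration.
network: dict[str, set[str]] = {
    "ana": {"bo", "cai"},
    "bo":  {"ana", "dee"},
    "cai": {"ana"},
    "dee": {"bo", "eve"},
    "eve": {"dee"},
}

def degrees_of_separation(start: str, goal: str) -> int | None:
    """Breadth-first search: the minimum number of edges between two nodes."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in network[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # no path: the nodes lie in separate components

print(degrees_of_separation("ana", "eve"))  # 3
```

The same traversal underlies the inferred, multi-layer relationships discussed below: a path of edges reveals a connection that no single edge records explicitly.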

Basic graphs represent the network entities and the characteristics of their relationships. More complex graphs also represent multiple layers of these relationships, including those that exist by inference of connections elsewhere in the network. Hence a graph can help to visualize aspects of community relationships that are not immediately apparent. For example, people’s identities can be strongly attached to where they are from and, in places of wide inequalities, they can become isolated in specific areas. Double-edged ‘socio-spatial’ inequalities cannot be seen easily by the broader urban community, leading to a limitation in the range of citizens’ sensitivities to these kinds of inequalities. Indicators of inequalities that are highly salient relate to visible urban-fabric or socio-cultural distinctions (cf. Chokor, 1991; Viega, 2012). Less salient, however, are information-based indicators such as social connectedness and access to economic opportunities (cf. United Nations, 2013: 77; Morsey, 2012).

The current availability of graph technologies, including web-based platforms, has brought about among web media participants a near-simultaneous engagement in building and maintaining these social networks. This social technology ubiquity is perhaps telling of a current step-change in media agility. People can draw or, rather, thread their social relationships using web-media platforms, embroidering elaborate identities with texts, artefacts, nodes and connectors, woven into the surfaces of media, communications and gaming platforms. Socio-spatial identities also are materialized in web-mapping, geographic information and positioning systems. Participation in digital media depends increasingly on touch-sensitive interfaces, serving to restage the gestural line as a means of digital threading. The embroidering of identifications forms as enmeshed surfaces of social domains. These surfaces are distorted and disrupted by certain attractors, they go through periods of instability and transition; they decay, only to regenerate. If social relationships can be viewed as having physics-like properties (as we discussed in Chapter 3), so this graphic interweaving of attractions, transitions and decays is perhaps suggestive of a ‘social physics’ of relativity.


URL: https://www.sciencedirect.com/science/article/pii/B9781843347514500078
