On what resources do management strategists need to focus if they want to leverage Artificial Intelligence and create Competitive Advantage? To provide an answer from business academia, this 5-part article series presents a framework for leveraging Artificial Intelligence in a business context. In Part 5, the last of the series, we discuss the strategic relevance of AI knowledge for applying AI in a business context.

AI Knowledge is born within the organization

The fourth and last element needed for applying AI in a business context is AI-enabling knowledge. One might argue that this is mainly embodied in the skilled labor. However, from a knowledge-based view, this argument only holds for the types of knowledge on the individual level: conscious and automatic knowledge. On a social or organizational level, objectified and collective knowledge is also relevant for AI. For example, suitable organizational structures and processes and a data-driven decision-making culture are essential for transforming the insights gained from applying AI into business value. Nevertheless, there is one type of objectified knowledge that is also highly important for applying AI as a team or individual: knowledge about AI technology and machine learning (ML) algorithms. It is classified as explicit, organizational knowledge because, nowadays, no Data Scientist knows all the important techniques by heart, let alone how to implement them efficiently. Most Data Scientists do not need to code them from scratch anyway, since highly efficient implementations are often already available as open source. It is enough to understand the underlying mechanisms and to know which specific technique might work in which situation.

Ten AI Technologies

Gerbert et al. summarize the whole universe of AI technologies in 10 areas:

  1. Machine vision,
  2. Speech recognition,
  3. Natural-language processing,
  4. Information processing,
  5. Learning from data,
  6. Planning and exploring agents,
  7. Image generation,
  8. Speech generation,
  9. Handling and control,
  10. Navigating and movement.

For all of these AI technologies, the most important underlying method is some form of ML. ML techniques are what allow an AI to recognize patterns in large datasets and to acquire knowledge by generalization. Especially for data mining or big data analytics purposes, ML algorithms are essential tools for learning directly from data. In this context, ML can be used for descriptive, predictive, or prescriptive analytics, which, as Sivarajah states, respectively mean describing the past or the present, predicting the future, and simulating the future outcomes of possible actions.
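To make this distinction concrete, the following minimal sketch contrasts descriptive and predictive analytics on a small, hypothetical sales table; the data and column names are illustrative assumptions, and the closing comment only hints at how prescriptive "what-if" analysis would build on the same model.

```python
# A minimal sketch contrasting descriptive and predictive analytics on a toy sales table.
# The data, column names, and spend levels are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Descriptive analytics: describing the past or the present.
sales = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50],      # hypothetical monthly advertising spend
    "revenue":  [120, 210, 290, 410, 480],
})
print(sales.describe())                     # summary statistics of the historical data

# Predictive analytics: predicting the future from the same data.
model = LinearRegression().fit(sales[["ad_spend"]], sales["revenue"])
planned = pd.DataFrame({"ad_spend": [60, 80]})
forecast = model.predict(planned)
print(dict(zip(planned["ad_spend"], forecast.round(1))))

# Prescriptive analytics would go one step further and compare such "what-if"
# forecasts for alternative actions in order to recommend the best one.
```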

Three Machine Learning Families

The three most important areas of ML techniques are supervised, unsupervised, and reinforcement learning.

  • Supervised learning is applied for classification or regression tasks, following the principle of learning by example.
  • Unsupervised learning deals with finding new structures in datasets without given examples, for example through clustering or dimensionality reduction techniques.
  • Finally, reinforcement learning deals with intelligent agents interacting with a dynamic environment, for example in autonomous driving or board games.

Each of these families of ML techniques contains a vast number of specific algorithms, and going through them is beyond the scope of this article. The most important ones for data mining are summarized by Wu et al. (2008). For further reading on ML, there are many extensive textbooks, for example Bishop (2006).
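As a brief illustration of the first two families, the following sketch runs a supervised classifier and an unsupervised clustering on the same dataset; it uses scikit-learn and the built-in Iris data purely as an illustrative stand-in for business data.

```python
# A minimal sketch of the two ML families that can be shown on a static dataset,
# using scikit-learn and the built-in Iris data as an illustrative stand-in.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: learning by example from features X and known labels y.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: finding structure in X without any labels, here via clustering.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])

# Reinforcement learning, in contrast, requires an agent interacting with a dynamic
# environment and optimizing a reward signal, so it cannot be shown on a static table.
```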

Deep Learning: Abstracting Complex Data Structures as the Predominant Trend in Machine Learning

For most of the AI technologies, the essential ML technique is deep learning. Within deep learning (DL), some specific types of network "architectures" have proved to be especially powerful for certain tasks. In the case of images, for example, Convolutional Neural Networks (CNNs) have outperformed other methods by a wide margin. For sequential data like speech or language, on the other hand, the favored type of architecture is the Recurrent Neural Network (RNN). The power of deep learning lies in its ability to automatically abstract complex data structures. However, like Artificial Neural Networks (ANNs) in general, DL models are computationally extremely expensive. Thanks to the utilization of graphics processing units (GPUs), however, Goodfellow et al. report that a 10-fold increase in computing speed could be achieved, making ANNs and especially DL computationally feasible. Next to the lack of interpretability, often mentioned as the crucial stumbling block, another main criticism of DL and ANNs in general is that they have many "hyperparameters" to set beforehand; experimenting with different architectures and configurations is therefore difficult and time-consuming, arguably even an art.
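To give an impression of what such an "architecture" looks like in practice, the following sketch defines a small CNN in Keras; the layer sizes, filter counts, and optimizer are illustrative assumptions, not a recommended recipe, and they already hint at the many hyperparameters mentioned above.

```python
# A minimal sketch of a CNN architecture in Keras, only to illustrate the kind of
# architecture and hyperparameter choices discussed above; the layer sizes, filter
# counts, and optimizer are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                      # e.g. 28x28 grayscale images
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolutions detect local patterns
    layers.MaxPooling2D(pool_size=2),                     # pooling abstracts them to coarser structures
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # class probabilities for 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # even this toy model exposes many hyperparameters: depth, filters, kernel sizes, ...
```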

Open Source: Not Easy to Use, but a Source of Competitive Advantage

This is why Data Scientists need to remember that their main goal is to create business value, and they need to balance the complexity of their models with their practicability. If done right, AI technology and ML algorithms can indeed be a source of competitive advantage, despite the fact that they are mostly open source. Müller-Lietzkow argues that even though open source knowledge is publicly available and basically a "free" resource, this does not automatically mean it is "easy to use"; due to the rarity of skilled labor as a necessary complement, even free resources can be rare in practice and thus represent sources of competitive advantage.
