Rich Content Poor Content


I am a data scientist, and as data scientists our job is to find information in data that helps businesses make better decisions. This means finding patterns using a special class of algorithms called machine learning. By understanding the patterns in this data, we also create intelligent applications with more decision-making power.

To create such intelligent systems, we quickly become demanding when it comes to data: it should have a structure suited for mining information, and it should be possible to transform (vectorize) it into a form where machine learning algorithms can be applied.
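As an illustrative sketch of what "vectorize" means here (the sentences and vocabulary below are made up), even a toy bag-of-words transform turns free text into numeric vectors that ML algorithms can consume:

```python
# Toy bag-of-words vectorization: turn each sentence into a count
# vector over a shared vocabulary, a form ML algorithms can work with.
sentences = ["users love fast search", "search drives business decisions"]

# Build the vocabulary from every word seen, in a fixed order.
vocab = sorted({word for s in sentences for word in s.split()})

# Each sentence becomes one row: how often each vocabulary word appears.
vectors = [[s.split().count(word) for word in vocab] for s in sentences]

print(vocab)
print(vectors)
```

Real systems use far richer representations (TF-IDF, embeddings), but the principle is the same: structured, countable data vectorizes easily.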

But most importantly, there should be plenty of it. Thankfully, humanity generates an ample amount of digital data every day. We write, talk, create, and capture moments in images and videos. We leave our digital footprints everywhere. Data is usually generated in two forms: digital footprints and content. Digital footprints are data points stored by applications about you as a user. For example, Amazon collects your click patterns on its website to learn your preferences. But here, I want to discuss data in the form of content.

If we look at how we collect and store content, it’s governed by two major factors:

1. Infrastructure

Operating systems (OS) store data in files. Take the example of audio: the OS stores it in files using mp3, wav, and many other codecs (a technical term for encoder-decoder). The OS needs encoding to store sound information in files and later needs a way to decode it to reproduce the sound. Encoding is the bridge between the sound card and the OS; decoding is the bridge between the OS and the human.
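A minimal sketch of that encode/decode bridge, using Python's standard wave module (the filename and the 440 Hz test tone are my own example, not anything from a real system):

```python
import math
import struct
import wave

# Encode: pack one second of a 440 Hz sine wave into a .wav container
# as 16-bit PCM, the simplest uncompressed audio codec.
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(44100)   # CD-quality sample rate
    samples = [int(32767 * math.sin(2 * math.pi * 440 * t / 44100))
               for t in range(44100)]
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Decode: read the frames back so software (or a sound card) can use them.
with wave.open("tone.wav", "rb") as f:
    frames = f.readframes(f.getnframes())
```

Note that nothing in the container describes what the sound *means*; the format exists purely to shuttle samples between disk and speaker.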

2. Consumer Behavior

What people want to do with data also determines how it is stored. People wanted editable documents, so we got .doc and .txt files. Some documents, legal docs for instance, did not need editing, so we got PDFs. Even early-stage HTML was primarily created to put text data on web pages. For media, we have countless audio and video formats: lossless encodings, small compressed formats, HD formats, and so on.

I believe we have missed or ignored another very important factor over the years: business intelligence. Shouldn't data be stored in a way that lets us get maximum business intelligence from it?

3. Intelligence — The third missing factor

We have largely ignored intelligence as a factor in data storage due to the lack of AI technologies. With no AI technologies in sight, storing data became the end goal. But now things are changing: AI technologies are improving exponentially, which means storing data is just the first step. Data is the digital fuel that powers intelligence to benefit both businesses and their end customers.

Text data is comparatively easy to manage and extract intelligence from. It is searchable, indexable, and can be understood by machine learning algorithms. But what about media? Media files are still the big, fat files that cannot be easily searched or indexed, and machine learning algorithms find them notoriously hard to understand. They are optimized for long-form entertainment, but is entertainment really the only purpose of that content?

Content is not just entertainment

Media carries voice information (in both audio and video) that is not just a source of entertainment but a source of information, one that is never mined for business intelligence. This content just sits there, dark and inaccessible, inside big fat media files.

Voice content living in interviews, podcasts, conference videos, customer calls, and meetings is valuable for business. Such content is under-utilized or entirely unutilized, and we are all at a big loss as a result.

To get intelligence from this information, we need to convert the data into a form that ML algorithms understand. Not all the data we store has this property. Take audio files: mp3, wav, FLAC, and various other formats. Are they suited for mining information? No, they are suited only for consumption. Dumb media.

Currently, the only way to mine some intelligence from this media is to hire a bunch of data scientists, use cloud ML services, or pay for services that tag the data manually.

But what if we flip this, and the infrastructure itself becomes intelligent? As a data scientist, I am exploring alternatives here. Can we have business-first data formats?

A better term would be "intelligence-first" content: a kind of rich media.

If we want mass adoption of AI technologies, and systems that perform intelligently, we should push intelligence one level down, into the infrastructure.

This will also help in fighting one of the biggest challenges of the current AI economy: centralization.

AI Centralization

Content in the form of text, images, video, and audio contains a huge amount of information, and converting this data into a form that can be analyzed is hard and expensive. We are seeing some success with images by reading their pixel values and using ConvNets.
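As a sketch of what "reading pixel values" means in practice, here is a single hand-rolled 3x3 convolution (the toy image and kernel are my own) of the kind ConvNets stack by the dozen:

```python
# A toy 4x4 grayscale "image": dark left half, bright right half.
image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 255, 255]]

# A vertical edge-detection kernel: responds where brightness
# increases from left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

# Slide the kernel over every 3x3 window and sum the products.
out = []
for i in range(len(image) - 2):
    row = []
    for j in range(len(image[0]) - 2):
        row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                       for di in range(3) for dj in range(3)))
    out.append(row)
```

Every output cell lights up on the dark-to-bright boundary; a ConvNet learns thousands of such kernels instead of hand-coding one.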

We are also seeing some success with machine understanding of natural languages, English in particular. Advances in speech-to-text conversion are making it possible to find and organize information in audio and video.
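That unlock can be sketched in a few lines. Here, `transcribe()` is a hypothetical stand-in for any real speech-to-text service, and the file name and transcript are invented:

```python
# Hypothetical pipeline: transcribe() stands in for a real
# speech-to-text model or cloud API; nothing here is a real service.
def transcribe(audio_path: str) -> str:
    # In practice this would send the audio to an STT engine.
    return "welcome to the quarterly earnings call"

transcript = transcribe("call.wav")

# Once speech is text, ordinary search and indexing work again.
found = "earnings" in transcript
```

The point is structural: the moment voice becomes text, every cheap, mature text tool (search, indexing, vectorization) applies to media.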

But only a few organizations have the capability to mine information from media content. It is very, very expensive. Hence we are seeing a chronic centralization of AI power.

Google, Amazon, and Microsoft are using their infrastructure, money, and reach to acquire huge amounts of data and build intelligence. They then offer it in their cloud services for others to build intelligent applications on. This is good for them, but not for the rest of us.

Bringing the intelligence of AI technologies down to the infrastructure level can break this centralization of power.

Our entire team at Spext envisions an internet whose components are themselves smart, so that businesses can focus on use cases and distribution. The biggest such component is data itself.

New Applications

Intelligent data formats will not just help businesses; they will also tap into new user behaviors, creating new markets. The amazing adoption rate of Alexa is exciting. Is your website ready to interact with Alexa? Is your media smart enough to interact with bots?

When I discussed these ideas with a friend, he jokingly described intelligent media as socialist data formats: the complete opposite of the AI capitalism of the big bros. Bringing intelligence to the infrastructure level means sharing the cost equally as well.

In this coming series, I will discuss how we at Spext are looking at it. We are set to innovate in the space of smart media and are firm believers in this idea. For now, we see it as the only way to unlock value from media content at scale.

The time for intelligent infrastructure is here and this is Day 0.


About the author

Ashutosh Trivedi

Co-Founder and CTO of Spext. Wanders in deep thoughts of science, spirituality and human nature. Often goes to trek Himalayan trails.
