


Digital Image Processing refers to the use of computer algorithms to process digital images, either to improve their quality or to extract information from them. With the rapid growth that image analytics is currently witnessing, Digital Image Processing is a handy tool for data analysts to have in their repertoire. This article aims at outlining the quintessential techniques of Digital Image Processing, a field that is quickly gaining traction amongst data analysts.

The smallest unit of an image, commonly referred to as a pixel, represents a list of values corresponding to the colour model of the image. For example, each pixel of the RGB (Red, Green, Blue) colour model specifies a list of 3 values corresponding to the intensities of red, green and blue present in the pixel.
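As a quick illustration, here is a minimal Python sketch (using NumPy; the tiny 2×2 image is constructed by hand purely for illustration) showing that a pixel in an RGB image is simply a triple of intensity values:

```python
import numpy as np

# A tiny 2x2 RGB image as a NumPy array
# (shape: height x width x 3 channels, values 0-255).
image = np.array([
    [[255, 0, 0], [0, 255, 0]],      # red pixel, green pixel
    [[0, 0, 255], [255, 255, 255]],  # blue pixel, white pixel
], dtype=np.uint8)

r, g, b = image[0, 0]  # the pixel at row 0, column 0
print(r, g, b)         # 255 0 0 -> a pure red pixel
```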

To get started, it is important to know a few of the commonly used methods of Digital Image Processing. This article aims at giving you a bird’s eye view of some of the essential methodologies.

Noise filtering – Noise refers to the unwanted or unnecessary information present in an image. Usually, we can find an underlying pattern to this extra information. Commonly occurring noise patterns in images include Gaussian noise (noise that follows a normal distribution), salt-and-pepper noise (bright pixels in dark regions or dark pixels in bright regions) and periodic noise (noise caused by electrical or electromechanical interference). Once a pattern is found, we can remove the noise by altering the pixel values, for example by blurring, as sketched below.

[Figures: original image, average-blurred image, Gaussian-blurred image]
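A minimal sketch of these blurs using OpenCV (assuming the cv2 package is installed; "input.jpg" is a placeholder file name):

```python
import cv2

# Load a noisy image (placeholder file name).
image = cv2.imread("input.jpg")

# Average blur: each pixel becomes the mean of its 5x5 neighbourhood.
average_blurred = cv2.blur(image, (5, 5))

# Gaussian blur: neighbourhood weighted by a Gaussian kernel,
# effective against Gaussian noise.
gaussian_blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Median blur: replaces each pixel with the neighbourhood median,
# well suited to salt-and-pepper noise.
median_blurred = cv2.medianBlur(image, 5)

cv2.imwrite("average_blurred.jpg", average_blurred)
cv2.imwrite("gaussian_blurred.jpg", gaussian_blurred)
```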

Contrast enhancement – Contrast enhancement brings out the differences in image pixel values and helps in identifying the parts of an image. Multiple ways of contrast enhancement exist in digital image processing, of which image adjustment (stretching the highest and lowest pixel values to predefined limits) and histogram equalization (spreading the peaks and troughs of the image histogram evenly across the intensity range, as sketched after the figures below) are two commonly used algorithms.

[Figures: actual image, histogram-equalized image]
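A minimal histogram equalization sketch with OpenCV (placeholder file name; cv2.equalizeHist operates on single-channel images, so the image is loaded in grayscale):

```python
import cv2

# Histogram equalization works on single-channel images,
# so load the image in grayscale (placeholder file name).
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Spread the histogram across the full 0-255 intensity range.
equalized = cv2.equalizeHist(gray)

cv2.imwrite("equalized.jpg", equalized)
```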

Image segmentation – Segmentation can be described as the process of partitioning an image into groups, which helps in identifying patterns and studying objects in the image. There are many instances wherein segmentation of an image becomes important. For example, separating the image of a human bone into outer and inner layers can help in locating a fracture precisely.
Commonly used methods in image segmentation include channel separation, thresholding methods and more. Channel separation involves splitting an image into the separate channels of its base colour model. Thresholding methods divide the image into parts by defining a threshold value, which can be obtained by studying the image histogram, or dynamically by focusing on a region of the image and statistically deriving a value from it, such as the mean, weighted mean or median pixel value of the focused area. Thresholding creates a binary image in which each pixel takes one of two distinct values: all the parts of importance are set to one static value and the rest to another, as in the sketch after the figures below.

[Figures: original image and its separated channels; original image and its thresholded version]
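A minimal sketch of channel separation and of global and adaptive thresholding with OpenCV (placeholder file name; the threshold values are illustrative):

```python
import cv2

image = cv2.imread("input.jpg")

# Channel separation: split the BGR image into its base channels.
blue, green, red = cv2.split(image)

# Global thresholding: pixels above 127 become 255, the rest 0.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Adaptive thresholding: the threshold for each pixel is the mean
# of its 11x11 neighbourhood minus a small constant.
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)

cv2.imwrite("binary.jpg", binary)
cv2.imwrite("adaptive.jpg", adaptive)
```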

Edge and Contour detection – An edge is a point in an image representing a sudden and abrupt change in pixel values, which usually marks an event such as the start or end of an object in the image. Edges are obtained by finding the local derivative (gradient) of the pixel intensity distribution. The existence of noise, however, will result in irrelevant or wrong points being detected, so we need to employ preprocessing methods like noise removal and blurring to remove the spurious gradients from the image. Contours are curves obtained by tracing continuous edge points of the same intensity. From the obtained contours we can extract information about a region, such as its dimensions, and use it as a region of interest for machine learning activities like object classification and/or tracking.

[Figures: original image, edge detection, contour detection]
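A minimal edge and contour detection sketch with OpenCV 4.x (placeholder file name; the Canny thresholds are illustrative):

```python
import cv2

image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Blur first so noise does not produce spurious gradients.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Canny edge detection with illustrative hysteresis thresholds.
edges = cv2.Canny(blurred, 50, 150)

# Trace contours along the detected edges
# (OpenCV 4.x returns two values here).
contours, _ = cv2.findContours(
    edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Each contour yields region information, e.g. a bounding box
# usable as a region of interest.
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("contours.jpg", image)
```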

In a world that is now producing information in multiple media channels, aspects like Digital Image Processing take on an increasingly vital role in rendering effective analytics. Getting the foundation of it right by understanding and perfecting the basic techniques involved is thus imperative to anyone attempting to make headway into the world of image analytics.

March 6, 2019

Social Media Analytics

Social media analytics is the process of gathering data from stakeholder conversations on digital media and processing it into structured insights, leading to more information-driven business decisions and increased customer centricity for brands and businesses.

With social media monitoring, businesses can also look at how many people follow their presence on Facebook and how many times people interact with their social profile by sharing or liking their posts. A more advanced type of social media analysis is sentiment analytics, in which sophisticated natural-language-processing machine learning algorithms parse the text of a person’s social media post about a company to understand the meaning behind that person’s statement. These algorithms can create a quantified score of the public’s feelings toward a company based on social media interactions and give reports to management on how well the company interacts with customers.

Dataval’s Social Media Analytics Framework

The architecture we have designed comprises a User Interface where a person can query for an analytics report on a particular topic. A query can be based on a keyword, a trending topic or a business-specific product. The query from the User Interface is processed by our analytics engine, which generates a statistical report in the form of graphs and numbers and presents that report on the dashboard.

Our Analytics engine consists of three main components:

  • Data Extraction
  • NLP Operations
  • Data Analytics

Data Extraction

In order to extract data, we used the platform APIs and public data provided by various social media platforms. These APIs can get you the data specific to a query, but since not all of these APIs are free, the amount of data they return depends upon the edition or tier used to extract the data. One such API that we have often used is the Twitter API. We have built an algorithm that downloads tweets related to a keyword or hashtag as they are posted online. In addition to the text of tweets, these APIs also provide a facility to download a plethora of data and metadata related to each tweet and to the user who tweeted or retweeted it, including, but absolutely not limited to: time, date, location, language, number of followers, number of accounts followed, date of account creation, profile picture, and the usernames of who made the original tweet and who retweeted it.
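As an illustration, here is a minimal sketch of keyword-based tweet extraction with the tweepy library (not necessarily the exact client used in our engine; the credentials and search term are placeholders, and v1.1 search endpoint access is assumed):

```python
import tweepy

# Placeholder credentials from the Twitter developer portal.
auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Download recent tweets matching a keyword or hashtag,
# along with their metadata.
for tweet in tweepy.Cursor(api.search_tweets, q="#Samsung",
                           lang="en", tweet_mode="extended").items(100):
    print(tweet.created_at,
          tweet.user.screen_name,
          tweet.user.followers_count,
          tweet.full_text)
```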

NLP Operations

Depending upon the use case, our analytics engine performs a variety of NLP operations using open-source software libraries like spaCy, TextBlob and NLTK. These libraries play a very important role in our analytics engine when we want to process natural language and retrieve the sentiment of a text, the popularity of a named entity, etc. Say our use case is to find the top 10 popular Samsung phones trending on Twitter; by using the named entity recognition technique of the above-mentioned libraries we can easily get such information, provided we have already extracted the data using the Twitter API.
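A minimal sketch of the two operations named above, assuming spaCy’s small English model (en_core_web_sm) is installed; the tweets are illustrative, and which entity labels the model actually assigns depends on the model:

```python
from collections import Counter

import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_sm")

tweets = [
    "The Galaxy S10 camera is amazing!",            # illustrative tweets
    "Not impressed with the Galaxy S10 battery.",
]

entity_counts = Counter()
for text in tweets:
    # Sentiment polarity: -1 (negative) to +1 (positive).
    polarity = TextBlob(text).sentiment.polarity

    # Named entity recognition: count product mentions.
    for ent in nlp(text).ents:
        if ent.label_ == "PRODUCT":
            entity_counts[ent.text] += 1

    print(f"{polarity:+.2f}  {text}")

# The most-mentioned entities approximate the "trending" products.
print(entity_counts.most_common(10))
```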

Data Analytics

Our analytics engine performs analysis on the data in parallel with the NLP operations (if required). Statistical results are evaluated and visualized using libraries like matplotlib and plotly, and all results are shown on the dashboard in an easily understandable form using graphs, charts and tables.
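A minimal visualization sketch with matplotlib, using illustrative sentiment counts:

```python
import matplotlib.pyplot as plt

# Illustrative output of the sentiment classification step.
sentiment_counts = {"positive": 420, "neutral": 310, "negative": 150}

plt.bar(list(sentiment_counts.keys()),
        list(sentiment_counts.values()),
        color=["green", "grey", "red"])
plt.title("Customer sentiment for the queried brand")
plt.ylabel("Number of tweets")
plt.savefig("sentiment_report.png")
```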

Framework Scope

This framework can be useful in various applications where the analysis of social media data can play a vital role in making better decisions. So far, this framework has been tested in two areas:

Brand Data Analysis

Our framework extracted the last 30 days of data from Twitter and Facebook using a query entered in the framework’s user interface. The Twitter API gave us tweets and other metadata related to the Samsung brand. These were stored by our engine in a database, from which it generated analytics reports.
Our engine was able to generate the following analytics reports:

  • Reports showing customer sentiment for the Samsung brand, classifying sentiments into three categories: positive, negative and neutral.
  • Samsung’s popularity among its competitors.
  • A word cloud showing popular hashtags that people used in their tweets.
  • The most retweeted tweets about Samsung.
  • The locations that tweeted most about the Samsung brand.

Political Data Analysis

Our framework extracted the last 30 days of data from Twitter using a query that searched for data under popular hashtags related to politics. Our engine was able to generate the following analytics reports:

  • Reports showing the most talked-about political leaders, area-wise.
  • Area-wise sentiment analysis of people towards a political party.
  • A word cloud showing popular hashtags that people used in their tweets.
  • The most talked-about political topics on social media.
  • A location-wise scatter plot showing the regions from which people tweeted.

Real-Time Object Detection

A lot can be done by detecting objects in real time, in other words through video analysis: from identifying a terrorist among a group of people to counting the number of vehicles in traffic. Real-time object detection is quite fascinating, as the user is able to see the result immediately.

For real-time object detection, the video is processed and object detection is performed through several stages. In a typical pipeline, the video is captured through a webcam and processed frame by frame: image preprocessing operations such as converting to grayscale, thresholding or blurring are performed on each frame. The frame is then passed to a machine learning model (in this case a model used for object detection), which identifies the objects and produces a predicted result as an output frame that replaces the actual frame.
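A minimal sketch of that frame-by-frame loop with OpenCV; detect_objects is a hypothetical placeholder for whatever detection model is plugged in, not a specific implementation:

```python
import cv2

def detect_objects(frame):
    """Hypothetical placeholder for the object-detection model:
    it should return the frame annotated with its predictions."""
    return frame

cap = cv2.VideoCapture(0)  # capture video from the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocessing: grayscale conversion and blurring.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Pass the preprocessed frame to the detection model and
    # display its output in place of the raw frame.
    output = detect_objects(blurred)
    cv2.imshow("Real-time object detection", output)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```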


A typical CNN network gradually shrinks the feature map size and increases the depth as it goes to the deeper layers. The deep layers cover larger receptive fields and construct more abstract representations, while the shallow layers cover smaller receptive fields. Utilising this, we can use shallow layers to predict small objects and deeper layers to predict big objects, as small objects don’t need bigger receptive fields and bigger receptive fields can be confusing for small objects.

Key properties:

  • A different bounding box prediction is made at each of these layers, and the final prediction is the union of all of them.
  • It has the ability to detect small objects as well, because independent detections are made from multiple feature maps (see the sketch after this list).
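A minimal PyTorch sketch of this multi-scale idea (an illustrative toy, not a full SSD implementation): two feature maps at different resolutions each get their own box-prediction head, and the outputs are concatenated into one union of predictions. The layer sizes and the number of anchors per location are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleDetector(nn.Module):
    """Toy multi-scale detector: the shallow (fine) feature map handles
    small objects, the deeper (coarse) feature map handles large ones."""

    def __init__(self, num_anchors=4, num_outputs=4 + 1):
        super().__init__()
        # Backbone: each stage halves the spatial size, doubles the depth.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # One prediction head per scale: box offsets plus an objectness
        # score for each anchor at each location.
        self.head_shallow = nn.Conv2d(32, num_anchors * num_outputs, 3, padding=1)
        self.head_deep = nn.Conv2d(64, num_anchors * num_outputs, 3, padding=1)
        self.num_outputs = num_outputs

    def forward(self, x):
        f1 = self.stage1(x)   # shallow, fine feature map
        f2 = self.stage2(f1)  # deep, coarse feature map
        preds = []
        for feat, head in ((f1, self.head_shallow), (f2, self.head_deep)):
            p = head(feat)    # (N, anchors * outputs, H, W)
            # Flatten to (N, H*W*anchors, outputs): one row per box.
            p = p.permute(0, 2, 3, 1).reshape(p.shape[0], -1, self.num_outputs)
            preds.append(p)
        # The final prediction is the union of all per-scale predictions.
        return torch.cat(preds, dim=1)

model = MultiScaleDetector()
boxes = model(torch.randn(1, 3, 64, 64))
print(boxes.shape)  # torch.Size([1, 5120, 5]): 4*(32*32) + 4*(16*16) boxes
```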

In traditional machine learning, we train a model A with data and labels for a particular task or domain A. On another occasion, for a task or domain B we again require labelled data to train model B. For example, a model trained on images of cats and dogs will fail to detect pedestrians.

Transfer Learning allows us to store and use the knowledge gained in one task or domain A in some other related task or domain B. We achieve this by storing the weights of model A and initializing model B with the stored weights before training. The major advantage is that transfer learning allows us to train a deep learning model with relatively little data.
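A minimal transfer-learning sketch with torchvision (a pretrained ResNet-18 stands in for "model A"; the 10-class output layer is an arbitrary assumption for "task B"):

```python
import torch.nn as nn
from torchvision import models

# Model A: ResNet-18 with weights learned on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred weights so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Model B: replace the final layer for the new task
# (10 classes assumed here); only this layer will be updated.
model.fc = nn.Linear(model.fc.in_features, 10)
```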