Eren Akbulut's Blog

Full-stack Sentiment Analysis Application with React, Flask and Tesseract

January 12th, 2021

Hello everyone, in this post I'll try to demonstrate a full-stack, AI-based web application with a React frontend and a Flask backend. Nearly all the frontend code for this project comes from this repository, so don't forget to go ahead and drop a star on the original React OCR code. As you can see, the final version of the project is also a fork of that React repo; I just added the backend and the sentiment analysis, changed the folder structure and rebased the project.

The ultimate goal of this project is to show you that you can use perfectly engineered, developer-friendly tools from this awesome community to tackle some truly hard problems in CS, and that you basically don't have to get your hands dirty if you don't want to.

I'll not explain the code line by line in this tutorial; instead I'll briefly explain how each part works, and then you'll have a base for creating similar image-based sentiment analysis applications with the support of AI.


React and Tesseract

As I've explained above, I'll not walk you through each step of the application; however, I'll explain how you can get this code and make it run in your own local environment.

First we need to clone the repo. We can get it from the final version link here (same as above); then we can either download it as a zip or clone it via git.

After that, the folder structure should look like the one below.

sentiment-fs-folder-structure

We should go ahead and install the npm packages first. For that, we can go into the "react-tesseract-ocr" folder and run the following commands:

npm install

npm start


If everything works just fine, we should see a page like the one below.

react-tesseract-plain

On this page we can actually run OCR (Optical Character Recognition) on any image we would like to try. OCR is quite complicated as a concept, and building such a project from scratch is absolutely a nightmare (at least if you don't have much experience, just like me). The repo we forked uses Tesseract.js, a JS port of Google's original Tesseract library, and that basically automates the whole OCR process for us.

The only place we'll change for this tutorial is the part below.

react-tesseract-place-to-change

This part is where the original author of the React project printed the text extracted from the picture. We replaced that part with a "POST" request to the given endpoint at port 8080 and basically put the extracted text into the body of the request.

From that point we can quickly move to our backend.


Flask and Naive Bayes

For the sake of simplicity I also skipped over the part where we usually take the data and train the model. That's usually the painful part of such a project, and I'm not trying to deny that fact with my approach. In fact, you can go ahead and train your own model by using the tutorial in the link; I personally did it that way some time ago.
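If you do want to train a model yourself, the general idea is roughly the sketch below. This is just a minimal example I'm adding for illustration, assuming an NLTK NaiveBayesClassifier trained on the movie_reviews corpus with simple bag-of-words features and saved with pickle; the tutorial in the link may use different data and preprocessing, and the file name "my_classifier.pickle" is just a placeholder.

# train_model.py - a minimal training sketch, not the exact code from the linked tutorial.
import pickle
import random

import nltk
from nltk.corpus import movie_reviews

nltk.download("movie_reviews")

def word_features(words):
    # Simple bag-of-words features: every word just maps to True.
    return {word: True for word in words}

# Build (features, label) pairs from the "pos" and "neg" reviews in the corpus.
documents = [
    (word_features(movie_reviews.words(fileid)), category)
    for category in movie_reviews.categories()
    for fileid in movie_reviews.fileids(category)
]
random.shuffle(documents)

train_set, test_set = documents[:1600], documents[1600:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print("accuracy:", nltk.classify.accuracy(classifier, test_set))

# Save the trained model so the backend can load it at startup.
with open("my_classifier.pickle", "wb") as f:
    pickle.dump(classifier, f)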

So in order to run the backend, we need 2 dependencies installed in our Python environment: nltk and Flask, from the following links. (They are conda guides because I'm using Anaconda in my local environment; there's also a guide if you want to learn more about Anaconda.)
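In my local setup, installing them looks something like the commands below (the exact channels and versions can vary depending on your conda configuration):

conda install flask

conda install nltk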

As you can see, we have our model in the root directory of the "sentiment-backend" folder. After we have all our dependencies, running the server is quite easy. All we need to do is run "python server.py" and our server should be up and running.

Our server.py file looks like this one below.

fullstack-sentiment-backend-code
  • First we load our model and create our classifier.

  • Then we initialize Flask and CORS. CORS stands for Cross-Origin Resource Sharing, and we need it to share information between our Flask backend and our React frontend. More about CORS here.
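To make that a bit more concrete, here is a minimal sketch of what a server.py along those lines could look like. The pickle file name, the "/analyze" endpoint path, the JSON field names and the use of the flask-cors package are my own assumptions for this sketch, so adapt them to your actual model and to what your frontend sends.

# server.py - a minimal sketch, assuming a pickled NLTK NaiveBayesClassifier
# and a frontend that POSTs JSON like {"text": "..."} to port 8080.
import pickle

from flask import Flask, jsonify, request
from flask_cors import CORS

# Load the trained model once at startup and create the classifier.
with open("my_classifier.pickle", "rb") as f:
    classifier = pickle.load(f)

app = Flask(__name__)
CORS(app)  # let the React dev server on another origin talk to this API

@app.route("/analyze", methods=["POST"])
def analyze():
    # Pull the OCR'd text out of the request body.
    data = request.get_json(force=True) or {}
    text = data.get("text", "")
    # Crude whitespace tokenization; match whatever preprocessing the model was trained with.
    features = {word: True for word in text.split()}
    sentiment = classifier.classify(features)  # e.g. "pos" or "neg"
    return jsonify({"sentiment": sentiment})

if __name__ == "__main__":
    app.run(port=8080)

You can also sanity-check the endpoint without the frontend by POSTing a JSON body with a "text" field to it, for example with curl or Postman.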


Testing Our System

I added 1 positive and 1 negative text-containing image to the frontend directory. We can simply go ahead and select one of them from the UI now.

react-ocr-file-upload

After we select our image to apply OCR and then sentiment analysis, our React app automatically starts to do the OCR. Once everything is resolved, and if our system can extract a meaningful text, we receive a response from the server.

ocr-positive

As we can see above, our system can understand that our image contains positive text. Now let's try a bad one.

ocr-negative

It says it's negative and I agree. I mean I might suck but not this model so far :P.


That's it, everyone. I hope this tutorial is helpful to you; I really believe it will be useful for many people this time.

Until next time, take care. :)

This blog has been created by Eren Akbulut 2020