Updated: Nov 17, 2019
Flask is a lightweight web framework written in Python that makes it easy to get started with a web application, and it supports extensions for building complex applications.
Let’s divide the entire process into 2 steps:
1. Train a model
2. Deploy the trained model using flask
1. Train a model
Let us build an image classification model using Keras to identify a specific type of cactus in aerial imagery. Our model should be able to identify whether a given image contains the cactus plant. More details about this dataset
can be found in the research paper: https://doi.org/10.1016/j.ecoinf.2019.05.005
Keras is a high-level neural network library that runs on top of TensorFlow. It was developed with a focus on enabling fast experimentation. ( https://keras.io/ )
The dataset has two folders, training and validation. Each folder contains a sub-folder with cactus images and another with non-cactus images. There are 17,500 images for training and 4,000 for validation. Keras provides the ImageDataGenerator class to load images in batches from the source folders and apply the necessary transformations. This is especially useful when there is not enough memory to load all the images at once.
• The flow_from_directory method takes a source folder as an argument and treats each sub-folder as a separate class. The sub-folders in our training data are ‘cactus’ and ‘no_cactus’, so all the images inside the ‘cactus’ folder are given the label 0 and all the images inside ‘no_cactus’ are given the label 1.
• The class_mode is ‘binary’ as there are only 2 classes here.
• target_size resizes all the images to a single desired size.
• batch_size is how many images we want to load at once.
• color_mode is ‘rgb’ as our images are colour images.
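The generator setup described above can be sketched as follows. The folder name ‘training_set’ and the 32x32 target size are illustrative assumptions, and a tiny synthetic directory is created first so the snippet runs without the real dataset:

```python
import os
import numpy as np
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator

# Tiny synthetic stand-in for the real dataset folders
for label in ('cactus', 'no_cactus'):
    folder = os.path.join('training_set', label)
    os.makedirs(folder, exist_ok=True)
    for i in range(4):
        pixels = np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)
        Image.fromarray(pixels).save(os.path.join(folder, 'img_%d.png' % i))

# rescale=1/255 scales pixel values into [0, 1]
datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = datagen.flow_from_directory(
    'training_set',          # each sub-folder becomes one class
    target_size=(32, 32),    # resize every image to a single size
    batch_size=2,            # how many images to load at once
    class_mode='binary',     # two classes -> 0/1 labels
    color_mode='rgb')
```

Because sub-folders are sorted alphabetically, `train_generator.class_indices` comes out as `{'cactus': 0, 'no_cactus': 1}`, matching the labels described above.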
A Convolutional Neural Network (CNN) is a type of deep neural network most commonly used for image classification. Let’s create a CNN in Keras and train it on the images.
Sequential: The sequential model lets you create a model layer by layer. Each layer is connected to the previous layer and the next layer.
Conv2D: This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If it is the first layer of a model, it takes an argument input_shape which is the image shape and format. Some other arguments are
filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution)
kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window
activation: Activation function to use
MaxPooling2D: The role of max pooling is to downsample and reduce the dimensionality of features. The argument pool_size is an integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal)
Dropout: Dropout is a technique used to address the problem of overfitting. A fraction of units is randomly dropped from the network during training to prevent complex co-adaptations.
Flatten: The flatten layer converts entire data to a one-dimensional array.
Dense: Each neuron of a dense layer receives input from all the neurons in the previous layer.
Activation: The activation function of a node defines the output of the node. It is important for non-linear properties in the network. Generally, in the last layer of a network, sigmoid activation is used for binary classification and softmax for multi-class classification.
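Putting the layers described above together, a minimal sketch of such a CNN follows. The filter counts and the 32x32 input size are illustrative choices, not necessarily the article’s exact architecture:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# The first layer takes input_shape: 32x32 RGB images
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu',
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))   # downsample by 2 vertically and horizontally
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))                    # randomly drop units to curb overfitting
model.add(Flatten())                        # flatten to a 1-D vector for the dense layers
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))   # sigmoid in the last layer: binary classification
```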
Loss function: A loss function evaluates how well a model is performing by comparing predicted and actual values. During training, the model improves by trying to minimize the loss function.
Optimizer: An optimizer is used to decide how fast the weights of a network can be updated (learning rate) during training. Some of the optimizers are sgd (Stochastic Gradient Descent), Adam, Adagrad, RMSprop.
Metrics: The metrics are used to observe how the model loss or accuracy changes with each epoch.
fit_generator: The train_generator is passed to fit_generator to train the model, and accuracy is measured on the validation_generator after each epoch.
steps_per_epoch: The steps per epoch should be the total number of images divided by batch_size, so that each image is passed to the model once per epoch. Suppose we have 200 training images and batch_size = 50; then steps_per_epoch = 200/50 = 4. So images 1-50 are passed first, then 51-100, 101-150 and 151-200, which completes one epoch.
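The compile-and-train step can be sketched as below. The original post trains with fit_generator on the directory generators; newer Keras versions fold that into fit, which is used here with small random stand-in arrays and a deliberately tiny model so the snippet is self-contained:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense

# Stand-in data: 200 random 32x32 RGB "images" with binary labels
x_train = np.random.rand(200, 32, 32, 3).astype('float32')
y_train = np.random.randint(0, 2, size=(200,)).astype('float32')

# A deliberately tiny model so the example trains in a moment
model = Sequential()
model.add(Flatten(input_shape=(32, 32, 3)))
model.add(Dense(1, activation='sigmoid'))

# binary_crossentropy: loss for a 2-class sigmoid output
# adam: optimizer controlling how the weights are updated
# accuracy: metric observed after each epoch
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

batch_size = 50
steps_per_epoch = len(x_train) // batch_size   # 200 / 50 = 4 batches per epoch

history = model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
```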
After training the model for 5 epochs, an accuracy of 92.2% is achieved. It can be improved further by fine-tuning the model.
Save the model so that it can be used in the Flask application we are going to create.
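Saving and reloading can be sketched like this; the path ‘models/cactus_model.h5’ follows the folder layout used later in the post (the exact file name is an assumption), and a tiny stand-in model is used so the snippet runs on its own:

```python
import os
from keras.models import Sequential, load_model
from keras.layers import Dense

# Tiny stand-in for the trained CNN
model = Sequential()
model.add(Dense(1, activation='sigmoid', input_shape=(4,)))

os.makedirs('models', exist_ok=True)
model.save('models/cactus_model.h5')             # architecture + weights in one HDF5 file

restored = load_model('models/cactus_model.h5')  # ready for predict() in the Flask app
```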
2. Create a Flask App
Now, let’s create a Flask application on Windows that allows users to check whether an image contains a cactus plant, using the model we built above.
Open terminal, go to the desired location and create a folder to host the app
C:\> mkdir aerial_cactus
C:\> cd aerial_cactus
Create a virtual environment and activate it. A virtual environment keeps the versions of Flask and the other libraries independent of other projects. It is not mandatory to create one, but it is good practice.
C:\aerial_cactus> python -m venv venv
C:\aerial_cactus> venv\Scripts\activate
(venv) C:\aerial_cactus>
NOTE: On Linux/macOS, instead of ‘venv\Scripts\activate’, use the following:
source venv/bin/activate
Creating the virtual environment is needed only the first time; afterwards, activate it directly with venv\Scripts\activate.
We need to install all the required packages/libraries before running Flask, using ‘pip install <pkg name>’. An easy way to install all the required packages at once is
C:\aerial_cactus>pip install -r requirements.txt
The requirements.txt file is provided with the project. If you create a project and want to make it easier for others to use, generate a requirements.txt file listing all the necessary packages with the command below.
C:\aerial_cactus>pip freeze > requirements.txt
If there is an error like “no module named <library name>”, the library is missing and can be installed with “pip install <library name>”.
Create a templates directory to store the HTML files
(venv) C:\aerial_cactus> mkdir templates
Flask has Flask-Bootstrap and Jinja2 template support, which makes it easier to create and style HTML pages. Most pages of a website share common portions; for example, the top bar and bottom bar are the same on every page. So, instead of repeating the code for those shared parts on every HTML page, we can create a base page and extend the remaining pages from it.
Create base.html and index.html file inside the templates folder to provide the users with a form to upload an image and allow our model to predict.
Create a result.html to display the result of the prediction.
Create a new directory ‘models’ and store the trained model in it. Create a main.py file in the project root folder (aerial_cactus). This is the file that initializes and runs the flask application.
Bootstrap(app): This enables the HTML pages to use the Flask-Bootstrap extension for styling.
_make_predict_function: This builds the predict() function ahead of time and makes it ready to work when it is called from several threads. (https://github.com/keras-team/keras/issues/6124)
UploadForm: Instead of creating our own HTML forms, the Flask-WTF extension provides FlaskForm, which can be passed to the template. Here we create a form to get an image from the user.
Preprocess: The preprocess(image) function is required because the user-provided image must be resized and scaled in the same way as the training set images.
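A plausible implementation of such a preprocess() helper, assuming the training generator used 32x32 RGB images rescaled by 1/255 (the exact target size in the original may differ):

```python
import numpy as np
from PIL import Image

def preprocess(image, target_size=(32, 32)):
    """Resize and rescale a PIL image the same way the training images were."""
    image = image.convert('RGB').resize(target_size)
    arr = np.asarray(image, dtype='float32') / 255.0  # same rescale=1/255 as training
    return np.expand_dims(arr, axis=0)                # add batch dimension: (1, h, w, 3)

# Hypothetical user upload, simulated here with random pixels
upload = Image.fromarray(np.random.randint(0, 255, (100, 80, 3), dtype=np.uint8))
batch = preprocess(upload)
```

The extra batch dimension is what lets the single image pass through model.predict(), which always expects a batch.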
@app.route('/', methods=['GET','POST']): This indicates that the homepage of our app accepts both the GET and POST methods. When the user enters the homepage URL, the predict() function runs and loads the index.html template. When the user uploads an image and clicks predict, the form is validated and the remaining steps are executed.
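The route described above might look like the sketch below. To stay self-contained it uses plain Flask (the full app also uses Flask-Bootstrap and a Flask-WTF UploadForm), the field name ‘image’ is an assumption, and the Keras calls are left as comments:

```python
from flask import Flask, request, render_template

app = Flask(__name__)

# In the real app the trained model is loaded once at startup:
# from keras.models import load_model
# model = load_model('models/cactus_model.h5')
# model._make_predict_function()  # pre-build predict() so threads can share it

@app.route('/', methods=['GET', 'POST'])
def predict():
    if request.method == 'POST':
        uploaded = request.files.get('image')     # form field name is an assumption
        # batch = preprocess(Image.open(uploaded))
        # prediction = model.predict(batch)
        # result = 'CACTUS' if prediction == 0 else 'NOT CACTUS'
        return render_template('result.html')
    return render_template('index.html')          # GET: show the upload form
```

The app itself is then started with `flask run`, as shown further below.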
if __name__ == '__main__':
The above check makes sure that main.py runs only when it is executed directly, not when it is imported by another file. The ‘debug=True’ setting is useful while the app is in development; it should be False when the app is deployed to a production server.
Run the application
(venv) C:\aerial_cactus> set FLASK_APP=main.py
(venv) C:\aerial_cactus> flask run
NOTE: On Linux/macOS, instead of the above commands, execute
(venv) C:\aerial_cactus> export FLASK_APP=main.py
(venv) C:\aerial_cactus> flask run
(venv) C:\aerial_cactus>flask run
* Serving Flask app "main.py"
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
Using TensorFlow backend.
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
As shown above, the website can be accessed now at http://127.0.0.1:5000/
When we enter the URL, the app goes to the homepage route (app.route('/')). As this is a GET request, the predict function at this route renders the index.html page. So, index.html is loaded with the help of base.html and Bootstrap.
Browse and select an image and click predict to get the prediction.
What happens when we click predict?
The app first checks whether the request uses the ‘POST’ method. If it does, it proceeds to the next step; otherwise, it throws an error.
Next, it validates the form. We allowed only the jpg, jpeg and png file formats, so if a file in another format is uploaded, or no file is uploaded, it throws an error.
The image from the local file system is read as a data stream. Image.open() from the PIL library is used to open the image, and then the image is converted to an array.
The preprocess function resizes and scales the image, and its dimensions are extended in the way the model expects its input.
The predict function is used to get the prediction from the saved model and the result is stored in the variable ‘prediction’. If the prediction == 0, the variable ‘result’ is assigned a value ‘CACTUS’, else it is assigned a value ‘NOT CACTUS’.
In order to display the image uploaded by the user, it must be sent to result.html. So, the image is converted to base64 encoding using BytesIO and b64encode, and then passed to result.html along with the ‘result’ variable.
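That encoding step can be sketched as follows; the data-URI line for result.html is one common way to display the encoded string, and the variable names are hypothetical:

```python
import base64
import io

import numpy as np
from PIL import Image

# Stand-in for the user's uploaded image
img = Image.fromarray(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))

# Write the image into an in-memory buffer, then base64-encode the raw bytes
buffer = io.BytesIO()
img.save(buffer, format='PNG')
encoded = base64.b64encode(buffer.getvalue()).decode('ascii')

# In result.html the string can then be embedded as a data URI:
# <img src="data:image/png;base64,{{ encoded }}">
```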
When the result.html is rendered, the relevant text is displayed based on the value of the ‘result’ variable. The image is generated from the encoded string and displayed to the user.
The application can be further extended so that instead of uploading an image, a URL of an image is provided.
About Data Science Authority
Data Science Authority is a company engaged in Training, Product Development and Consulting in the field of Data Science and Artificial Intelligence. It is built and run by highly qualified professionals with more than 10 years of working experience in Data Science. DSA’s vision is to inculcate data thinking into individuals irrespective of domain, sector or profession and drive innovation using Artificial Intelligence.