
How to deploy your AI Server


Deployment of AI Server on EC2 instance

If you work in any Artificial Intelligence domain, developing exciting AI models is part of your daily routine, whether you build them from scratch, modify previous work, or use libraries and frameworks to build a smart solution. Agree?

After building that AI model, you might think of sharing it with your friends and colleagues so they can experience what you have developed, but soon you realize that you can’t carry your laptop to everyone to show off your fantastic AI program! Disappointed?

You have come so far, so please don’t feel disappointed. We will help you share your work with the whole world by hosting the application on an AWS EC2 instance! So, let’s move forward and make it possible, together.

Before starting the deployment of an AI server on an EC2 instance, you must first build an API to communicate with your ML program/model. If you haven’t built the API yet, kindly review the article “How to integrate Flask API with ML Models”.

Prerequisites

The prerequisites for deploying an application on an EC2 instance are: 

  1. Active AWS credentials (root or an IAM user with the required access) 
  2. A working Python server with the ML model integrated into the API 

Steps to deploy an AI server on an AWS EC2 instance

The main problem I faced during the initial phase of my career was that I couldn’t find, in one place, all the steps needed to successfully deploy an AI server on an EC2 instance. I had to open a number of tabs in different browsers to get the task done.

I have tried to cover all the steps to deploy an AI server on an EC2 instance so that even a beginner can deploy an AI model/program with Flask and Python. 

Step.01

First, open the AWS Management Console and select EC2 from AWS Services.


Step.02

Once you’re on the EC2 dashboard, select Instances (running).

Step.03

Now you’re ready to launch the instance, so click the orange ‘Launch Instance’ button and move to the next step.


Step.04

As a developer, you probably already know that Ubuntu has an edge over many other operating systems for server work. I would recommend selecting Ubuntu 18.04 LTS or 20.04 LTS, which is free tier eligible. 

Step.05

While creating an instance, you will be asked to choose an Instance Type. Select it according to your requirements, then press Next to jump to the next step. 


Step.06

Default storage is selected based on the instance type you chose in the previous step. Add more storage now if needed; I would recommend increasing it to at least 20 GB. In this example, I increased it to 40 GB.

Step.07

Security groups are the essential firewall rules of your instance; it will only allow traffic that matches the rules you have added. 

Allowing ‘All traffic’ to your instance, as I have done in the attached example, is not a best practice, but since this is a beginner guide, you can follow the same rules for now, or add them according to your requirements. Once you are done adding all the necessary rules, press the Review & Launch button. 
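The console wizard is enough for this guide, but for reference, the same kind of inbound rules can also be added from the command line with the AWS CLI. This is only a sketch: the security group ID below is a placeholder, and port 5000 is an assumed Flask/Gunicorn port.

```shell
# Allow SSH (port 22) from anywhere; sg-0123456789abcdef0 is a placeholder ID
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow the API port (assumed 5000 here) from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5000 --cidr 0.0.0.0/0
```

Opening only the ports you actually need is the safer alternative to the ‘All traffic’ rule used in the screenshots.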

Step.08

I know it’s a long process but, the wait is OVER! You can now review your settings and press the Launch button. 


Step.09

Now, create a new key pair, give your instance key a unique name, and finally download the key pair. After pressing the download button, you will find the key file with the selected name in your Downloads directory. 

Before moving to the next step, take a breath and focus on this important point: the key you just downloaded is the most valuable thing here, effectively the password of your instance. If you lose it, IT’S ALL OVER! 

Step.10

Once the instance is launched successfully, go back to the main Instances page and wait a while until the instance finishes setting up and shows ‘2/2 checks passed’ in the Status check column.

Step.11

Select the row of your instance, name it (optional but preferable), and press the Connect button. Once you press the button, navigate to the SSH client tab.

Step.12

Now, open a terminal on your local system inside the directory containing the downloaded ‘.pem’ file, and run the ‘chmod’ command to make the key private.  
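The chmod step looks like this (the key filename is a placeholder; use the name you chose when creating the key pair):

```shell
# Stand-in for the downloaded key; use your real .pem filename
touch my-instance-key.pem

# Restrict the key to owner read only; SSH refuses keys with looser permissions
chmod 400 my-instance-key.pem

# Verify: the permissions column should read -r--------
ls -l my-instance-key.pem
```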

Step.13

Copy the example command shown there to connect your local machine to your EC2 instance.


Step.14

Paste the copied command into the same terminal, then type ‘yes’ and press Enter to establish the connection. 
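The copied command looks something like this (a sketch only; the key filename and the Public DNS are placeholders for your own values):

```shell
# Connect to the instance; 'ubuntu' is the default user on Ubuntu AMIs
ssh -i "my-instance-key.pem" ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
```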


Hurray! You’re now inside your EC2 instance machine!!!


Step.15

As the server is built with Python, you will need Anaconda. Install and set it up as follows:

  1. sudo apt-get update
  2. cd /tmp
  3. wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh (or the latest available version from the Anaconda archive)
  4. bash Anaconda3-2020.02-Linux-x86_64.sh (press Enter, type yes, and then press Enter again)
  5. source ~/.bashrc
  6. cd ~
  7. conda update conda
  8. conda update anaconda
  9. Create a conda environment for your project: conda create -n ‘name’ python=3.7 (‘name’ is the environment name; python=3.7 is optional or according to your requirements)
  10. conda activate ‘name’
  11. Make a directory for your AI server: mkdir dirname
  12. Change into it: cd dirname
  13. Clone your Python code inside the directory: git clone https://…
  14. Install all the requirements: pip install -r requirements.txt
  15. Install Gunicorn: pip install gunicorn
  16. Run your Python app with Gunicorn: gunicorn run:app --daemon 
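One note on the final Gunicorn step: by default, Gunicorn binds only to 127.0.0.1:8000, so the API would not be reachable from outside the instance. A sketch of an explicit invocation, assuming your Flask app object is named `app` inside `run.py` and that 5000 is the port your code uses:

```shell
# Bind to all network interfaces on port 5000 and run in the background
gunicorn --bind 0.0.0.0:5000 --daemon run:app

# Confirm the workers are up
ps aux | grep [g]unicorn
```

Remember that whichever port you bind here must also be open in the instance’s security group (Step.07).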

Step.16

Tired of the long list of steps? Apologies, but the good news is that your app is already deployed and running!!! You can now test it via Postman and share your API or set of APIs with your friends and colleagues.

In Postman, instead of localhost, use the instance’s Public IPv4 address with the port number specified in your Python code!
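If you prefer the terminal over Postman, the same test can be sketched with curl. The IP address, port, and /predict route below are placeholders; use your instance’s Public IPv4 and whatever routes your API actually defines:

```shell
# GET request to the deployed API (placeholder IP, port, and route)
curl http://12.34.56.78:5000/predict

# Or a POST with a JSON body, if your endpoint expects one
curl -X POST http://12.34.56.78:5000/predict \
     -H "Content-Type: application/json" \
     -d '{"input": "sample"}'
```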

Conclusion on EC2 Instance

In this Tezeract article, you have learned how to deploy an AI server on an EC2 instance, integrated with a Flask-Python server. I have tried to cover every possible detail with screenshots for better understanding, considering the problems beginners face in this field.

Hopefully, you guys found this article fruitful!

In the next part, I will cover installing SSL and setting up Nginx for the instance running your Flask-Python server, so that you can make your API requests secure!

Abdul Hannan


Co-Founder & CEO
