Getting Started with Alibaba Cloud — Object Storage Service (OSS) for Static Files and GCP Compute Hosting.

Abhik Banerjee
13 min read · Mar 28, 2021

This article is part of my “Getting Started with Alibaba Cloud” series, which features 5 articles published on Medium and DConCloud. This is the final article in the series. The articles published so far are:

  1. Getting Started with Alibaba Cloud — Resource and Access Management for Better Practices — 1
  2. Getting Started with Alibaba Cloud — Resource and Access Management for Better Practices — 2
  3. Getting Started with Alibaba — Alibaba ECS
  4. Getting Started with Alibaba Cloud — Object Storage Service (OSS) — 1

If you have been following the above articles, by now you already have a RAM user and an OSS bucket, and you know how to deploy an ECS instance on Alibaba Cloud. With this article, we finish our “Getting Started with Alibaba Cloud” series. However, there’s a twist: we are not going to use Aliyun ECS this time. We will be delving into “multi-cloud” a little bit, using both Alibaba Cloud and Google Cloud Platform for this particular tutorial.

If you are not aware of what “multi-cloud” is, consider this: you are the client and you want your solution to always have the best. That “best” is not necessarily limited to a single cloud vendor. For instance, Google Cloud Platform is one of the best (if not the best) places to run your Kubernetes workloads. Similarly, you’d want Oracle RAC on OCI. Perhaps you would like to use DynamoDB in your solution as well; that is where AWS comes in. So at the end of the day, you don’t have just one cloud vendor. Your solution rests on the services of multiple cloud providers. That is “multi-cloud”. (Note that this is not “hybrid cloud”. Hybrid cloud is when you merge a private cloud with a public cloud provider.)

We have our OSS bucket in Alibaba Cloud. We are going to use the RAM user’s Access Key that we generated to develop a NodeJS server that uses signed URLs to fetch the images. The server will be hosted on a Compute Engine VM instance on Google Cloud Platform.

So, without further ado, let’s get started.

OSS Bucket Images.

We have uploaded 3 images to the bucket beforehand. We are now going to change the Access Control List (ACL) to show how the data in your bucket can be made publicly available or kept private.

In the screen shown above, we have selected the 3rd object in the bucket (an image titled “abhik_banerjee.png”) and clicked on “More” > “Set ACL”. We have then set the ACL of the image to “Public Read”, which means that anyone can read it without authentication. To view the change, click on “OK” and then click on the image to see its details. You will see something similar to what is shown below.

Notice the “URL” field? It’s substantially shorter than what we have generally seen, right?

You can check the ACL of any other image in the bucket. Since the bucket was created as a private bucket, every object uploaded to it inherits this ACL by default. You can manually set the ACL to any other value you want. To check this, just click on any other object and view its ACL in the same way. Here, we check the first image as shown below.

As you can see in the figure above, the image is “Private” by default. Close this blade and then click on the image name. You will see the image details presented in a similar fashion, but you should notice an immediate difference. Look at the image below and compare it with the second image in this article.

The “URL” is longer than before, right? That’s because this is a “Signed URL”; we will discuss it in more detail in the code section. Long story short, this URL contains a “key”, so to speak, which is checked against the “lock” when you or someone else tries to access the object. Next, we get down to the coding section. Let’s code a small NodeJS server which uses the Alibaba Cloud OSS SDK to query the images and display them on the screen.

Coding your NodeJS Server for OSS.

The code for this demo is available on my GitHub. You can clone the repo at this link to get started with the demo right away if you want to.

Here, we will be using ExpressJS to build a NodeJS server that serves pages when visited. We have used Handlebars as the templating engine. Before beginning, you should do the following:

1. Create a directory called “config” in the root of the project and create a file “access_key.json” inside it. This directory is ignored by version control by default (via the repo’s .gitignore). It is a very important directory and must always be kept private.

2. Paste the AccessKeyId and AccessKeySecret of the RAM user into this file (you should have a CSV downloaded for the same if you followed the previous article), as sketched below.
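For reference, here is a minimal sketch of what “access_key.json” could contain. The exact key names (“accessKeyId” and “accessKeySecret”) are an assumption on my part; they must match whatever names the controller code reads.

{
  "accessKeyId": "<AccessKeyId from the RAM user CSV>",
  "accessKeySecret": "<AccessKeySecret from the RAM user CSV>"
}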

Next, let’s have a rundown of the basics.

This figure shows the directory structure of the project. The “index.js” file is the heart of our project; this is where the Express server is created, and all the other modules are imported into this file. In a real project, this file should be kept as short as possible. In lines 1 to 5, we import Express and then instantiate it. The port is defined as “3000”. We also import the “controllers”, which are used as handlers for the routes.

From lines 7 to 17, we import Handlebars and set it as the view engine. We also set ‘public’ inside the ‘src’ directory as the place where Express should look for static files like CSS and JS. The layout directory for Handlebars is set to ‘views’ inside ‘src’. Additionally, we configure files with the extension ‘.hbs’ to be rendered; these ‘.hbs’ files are the substitute for your typical HTML files.

Lines 20 to 23 define the routes and start the server. There are 2 routes: the root (‘/’), or home, where the browser is directed first and where the buckets made by the RAM user are displayed, and the ‘/image’ route, where the images from the bucket are displayed. A sketch of the whole file follows. After that, we shall have a look at the main.hbs file inside the ‘views’ directory, as shown in the figure below.
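Since the figure may be hard to read, here is a minimal sketch of what index.js might look like based on the description above. The module paths, handler names and the express-handlebars (v5-style) configuration are assumptions, not a verbatim copy of the repo.

// index.js - minimal sketch of the server entry point (names are assumptions)
const express = require('express');
const exphbs = require('express-handlebars');
const path = require('path');
const controllers = require('./src/controller'); // route handlers (assumed path)

const app = express();
const port = 3000;

// Handlebars as the view engine; '.hbs' files live in src/views
app.engine('hbs', exphbs({
  extname: '.hbs',
  layoutsDir: path.join(__dirname, 'src', 'views'),
  defaultLayout: false // render views without a wrapping layout (assumption)
}));
app.set('view engine', 'hbs');
app.set('views', path.join(__dirname, 'src', 'views'));

// Static assets (CSS/JS) are served from src/public
app.use(express.static(path.join(__dirname, 'src', 'public')));

// Routes: '/' lists the buckets, '/image' shows signed-URL images
app.get('/', controllers.getBuckets);
app.get('/image', controllers.getImage);

app.listen(port, () => console.log(`Listening on port ${port}`));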

We are just using a clever trick here. As you can see, this is mostly like your average HTML except for some special tokens like {{#each}} and {{#if}}, courtesy of Handlebars templating. We check whether the “image” property was sent as data during rendering and, if so, render that section; the same goes for “buckets”. As you will see in the controller section, these are data that we get using the Aliyun OSS SDK.

If you scroll up to the header section, you will notice that we have inserted the link to the image that we made public in our bucket in the previous section. That image is readily available through the link without the need to authenticate, which is why the browser renders it directly. A rough sketch of the template follows. Next, let’s have a look at the controllers.
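Here is a rough sketch of what main.hbs could look like; the markup is an assumption that only illustrates the {{#if}}/{{#each}} pattern described above, and the public URL is a placeholder for your own object’s URL.

<!-- main.hbs - illustrative sketch, not the exact repo file -->
<html>
<head>
  <title>OSS Demo</title>
</head>
<body>
  <!-- public object: plain URL, no signing needed -->
  <img src="<public URL of abhik_banerjee.png>" alt="public image">

  {{#if buckets}}
    <ul>
      {{#each buckets}}
        <li>{{this.name}} ({{this.region}})</li>
      {{/each}}
    </ul>
  {{/if}}

  {{#if image}}
    <!-- private object: src is a time-limited signed URL -->
    <img src="{{image}}" alt="private image via signed URL">
  {{/if}}
</body>
</html>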

Browse to “buckets.js” inside the “src/controller” directory. What we are going for here is to stick to the MVC architecture of yore. Though in a dev scenario like this you do not have any strict restrictions, sticking to a convention helps organize your files and keep your code clean.

The “buckets.js” file is important in this demo for many reasons. First, it imports the Access Key ID and Access Key Secret from “config/access_key.json”. Next, it initializes the Alibaba Cloud OSS SDK client with these credentials (lines 3 to 9). Notice how we pass in the bucket name as well; this is not a required parameter. The region parameter is not required either, but for the sake of the demo we initialize the client with both.

Next, from lines 12 to 20, we create the controller for the ‘/’ path of the site. It uses the OSS client we made in lines 4 to 9 to get a list of the buckets. The name, region, owner and other details are returned in the “buckets” property of the “result” variable, which we declare in line 14. Note that you need a basic understanding of Promise/async-await syntax at this point. We then pass the data to “main.hbs”, which we looked at in the previous image, and it gets rendered in the browser.

From lines 23 to 40, we make another controller called “getImage”, which handles the ‘/image’ route. We generate signed URLs for the “01_thumbnail.jpg” and “01.jpg” files in the bucket. Did you notice the long URL format for the private objects in the previous section? Since they are private objects, a signature that expires after a limited time must be attached to the URL; it proves that the request comes from you and can be allowed. We pass these URLs to “main.hbs” to get rendered. A sketch of the controller file follows.
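Here is a minimal sketch of what buckets.js might look like, using the ali-oss SDK’s listBuckets() and signatureUrl() calls. The region, bucket name, key names read from the JSON file and the expiry value are assumptions for illustration.

// src/controller/buckets.js - illustrative sketch
const OSS = require('ali-oss');
const { accessKeyId, accessKeySecret } = require('../../config/access_key.json');

// Initialize the OSS client; bucket and region are optional parameters here
const client = new OSS({
  region: '<your bucket region, e.g. oss-ap-south-1>',
  accessKeyId,
  accessKeySecret,
  bucket: '<your bucket name>'
});

// Handler for '/': list the RAM user's buckets and render them
exports.getBuckets = async (req, res) => {
  const result = await client.listBuckets();
  res.render('main', { buckets: result.buckets });
};

// Handler for '/image': generate time-limited signed URLs for private objects
exports.getImage = (req, res) => {
  const image = client.signatureUrl('01.jpg', { expires: 3600 });            // valid for 1 hour
  const thumbnail = client.signatureUrl('01_thumbnail.jpg', { expires: 3600 });
  res.render('main', { image, thumbnail });
};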

Given below is a view of the package.json file. If you are new to NodeJS, the package.json file keeps track of the details of the project; it records things like which modules/libraries have been installed and are being used.
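A minimal package.json for this project could look like the following; the exact name and version numbers are assumptions, but the “test” script matches the Nodemon dev-server behaviour described below.

{
  "name": "dconcloud-oss-demo",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "nodemon index.js"
  },
  "dependencies": {
    "ali-oss": "^6.0.0",
    "express": "^4.17.0",
    "express-handlebars": "^5.0.0"
  },
  "devDependencies": {
    "nodemon": "^2.0.0"
  }
}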

You can run “npm i ali-oss” to install the Alibaba Cloud OSS SDK for NodeJS separately. However, if you wish to run the whole project locally, go to the root directory of the project and run:

1. “npm install” — This installs the dependencies of the project. So all the libraries are installed in one go.

2. “npm test” — This starts the Dev Server using Nodemon.

You can open your browser and browse to “localhost:3000” to see the website. For now, stop the server using “Ctrl+C” as we are now going to host this on a GCP VM Instance.

Hosting on VM Instance.

You can go ahead and use Alibaba Cloud’s ECS if you want to; the steps are similar to the ones here. In this article, we will use Google Cloud Platform to host the NodeJS server on a Compute Engine instance. First, as shown below, we create a new project titled “alibaba-oss-demo”.

This will take some time to create. Once the project is created, you will be directed to the project’s GCP Console, which should look something like what is shown below. Click on the hamburger menu icon at the top left corner, scroll down to the “Compute” section, and choose “Compute Engine” and then “VM Instances” as shown below.

If this is the first time you are deploying a VM instance, it will take some time, since the Compute Engine API is activated the first time you browse to this section. Click on “Create” (the blue button which appears around the centre of the screen). This will direct you to the new VM instance creation page. Notice that this section is different from Aliyun’s ECS: you get to define all the parameters of the VM here on one page (refer to the image below).

As you can see, we are creating a VM named “OSS Demo”. Most of the options here are kept at their defaults. Make sure to check the boxes which enable HTTP and HTTPS access for the VM. This tags the VM with the 2 default network tags “http-server” and “https-server”, on which the default firewall rules act.

While for this demo it is not relevant to pass any metadata, keep in mind that the way we are passing the Access Key and Secret is not at all considered “best practice”. Consider the scenario where you have a managed group of VM instances (autoscaling) behind a load balancer. Would you upload the keys into each VM every time? Would you keep the config directory inside the project? The first option is inconvenient and the second is insecure. Metadata may be used instead, with these values passed in as environment variables; in NodeJS, you can access them using the “dotenv” library, as sketched below. A better practice would be to put these keys in a private cloud bucket and then create an authorized service account to access it. While there are even better practices, we won’t look into those in this beginner-oriented article.
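For illustration, here is a minimal sketch of reading the credentials from environment variables with dotenv instead of a JSON file; the variable names are assumptions.

// Alternative to config/access_key.json: credentials via environment variables
require('dotenv').config(); // loads a local .env file in development
const OSS = require('ali-oss');

const client = new OSS({
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,         // assumed variable name
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET  // assumed variable name
});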

Once you click on create, the VM will be created. SSH into it and install NodeJS by running the following:

sudo apt-get update

sudo apt-get install nodejs

sudo apt-get install npm

After these complete, you can run “npm -v” and “node -v” to make sure that everything is installed. Next, clone the Git repo into the VM instance (you may need to install Git on the VM using a similar process). After this, browse into the project and make a “config” directory. Run:

cd dconcloud-OSS-Demo

mkdir config

Upload the “access_key.json” from your local copy of the project’s config directory to the VM, and then move it into the config directory with the command below:

mv access_key.json config
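If you prefer the command line over the SSH-in-browser upload feature, one way to copy the file from your local machine is gcloud compute scp; the instance name and zone below are placeholders for your own.

gcloud compute scp ./config/access_key.json <instance-name>:~/ --zone=<instance-zone>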

Next, run “npm install” and then “npm test”. This should show that the server is running on port 3000. Return to the GCP VM console and click on the external IP allocated to the VM. It opens a new tab, but instead of the site you should see something similar to what is shown below.

This is because HTTP and HTTPS access to the VM is restricted to ports 80 and 443. We will create a new firewall rule to allow access on port 3000 as well. For this, click on the hamburger menu at the top left corner, click on “VPC Network” in the menu, and then click on “Firewall”. Here you will see all the firewall rules listed. Click on “Create” to create a new firewall rule.

Refer to the image below for details.

As you can see, we are defining a firewall rule called “default-allow-http-3000”. We set “Targets” to “Specified target tags”, which means the rule does not apply to all compute resources, and specify “http-server” as the target tag. Remember that this tag was automatically applied to the VM when we enabled HTTP access during VM creation.

Next, we set the source filter to “IP ranges” and specify “0.0.0.0/0” as the source IP range; this is the CIDR notation that matches every IPv4 address, i.e. the entire internet. Then, under “Protocols and ports”, we specify port “3000” for the TCP protocol (designated as “tcp” in the figure above). Click on “Create” to finish this process.

What we have done just now is tell GCP to allow TCP traffic from the internet (the “source”) to reach all compute resources bearing the network tag “http-server” (the “target”) on port 3000. The same rule can also be created from the CLI, as shown below.
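For reference, here is a sketch of the equivalent gcloud command, assuming the rule is created on the default VPC network.

gcloud compute firewall-rules create default-allow-http-3000 --network=default --allow=tcp:3000 --source-ranges=0.0.0.0/0 --target-tags=http-server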

Next, go to the external IP once more, change the protocol from HTTPS to HTTP, and point the link at port 3000. In my case, the external IP is 34.69.44.19, so the URL turns out to be “http://34.69.44.19:3000” (refer to your cloud console for your instance’s external IP). You should see something similar to what is shown below.

So my handsome face appears (XD) along with the details of the bucket we created in the previous tutorial. This is the root “/” or home route. Next, clicking on the “Get Image” button should open something like what is shown below.

You can verify that the image shown on the site is obtained from a “Signed URL” by pressing F12 (if you are on Chrome) and inspecting the image element of my face (which is a public object) and then the scenic image’s element (which is a private object).

As mentioned in the image above, this brings us to the end of our tutorial. This article also concludes the “Getting Started with Alibaba Cloud” series. While we deployed the server on a GCP Compute instance, the process for doing it on Alibaba Cloud’s ECS is similar. Hopefully, this also gives you a taste of the “best of all worlds” mantra of “multi-cloud”.

Before we end this article, we would like to say that if you have any suggestions or requests for a tutorial video or article on a specific cloud-related topic, feel free to send a mail to Chirag Nayyar at chirag@chiragnayyar.com and/or me at abhik@abhikbanerjee.com.

You can get in touch with me on Twitter or LinkedIn. Until then, stay safe n’ well, keep your cloud fundamentals clear and, as we say at DC on Cloud: Keep Calm n’ Cloud!

