Redis Message Queue - Implementation - Commands

In this blog post, I will discuss the Redis message queue built on the Stream type. Redis 5.0 introduced a new data type called Stream, which is specially designed for message queues. There are many other message queues, such as RabbitMQ and Kafka, but here we will just discuss how to use Redis as a message queue.

In a distributed system, when two components communicate through a message queue, the component that sends messages to the queue is called the producer, and the component that consumes and processes those messages is called the consumer.

A message queue exists chiefly to smooth out the mismatch in processing capacity between producers and consumers, and it is an almost indispensable piece of middleware at large companies. Before 5.0, Redis already offered a message queue of sorts based on publish/subscribe (pub/sub).

Pub/sub has the disadvantage that messages get discarded on Redis downtime, network disconnection, and so on. Redis Stream, however, provides message persistence and master-slave replication, allows any client to access the data at any time, remembers each client's read position, and ensures that messages are not lost.

Redis Message Queue Commands

1. XADD

XADD infotipsnews * 1 hello
XADD inserts a message into the message queue (in the current example the queue is named infotipsnews). The field of the message is 1 and the value is “hello”. The “*” after infotipsnews tells Redis to auto-generate a globally unique ID.

Redis generates an ID such as 1631288930852-0 for the inserted message: the first half, 1631288930852, is the server's UNIX time in milliseconds, and the second half, 0, is a sequence number that distinguishes messages delivered within the same millisecond.
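
Since the ID encodes the time, you can decode it on the client side. A minimal sketch in JavaScript (the helper name parseStreamId is my own, not part of any Redis client):

```javascript
// A stream ID has the form "<unix-ms>-<sequence>".
function parseStreamId(id) {
  const [ms, seq] = id.split('-').map(Number);
  return { timestamp: new Date(ms), sequence: seq };
}

const parsed = parseStreamId('1631288930852-0');
console.log(parsed.timestamp.toISOString()); // 2021-09-10T15:48:50.852Z
console.log(parsed.sequence);                // 0
```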

2. XTRIM

XTRIM infotipsnews maxlen 5
It removes older entries from the message queue based on a strategy such as MAXLEN or MINID. When the stream exceeds the given maximum length, the oldest messages are deleted. In the above example, once the stream grows beyond a maximum length of 5, the oldest messages are trimmed.

Due to the stream's internal implementation, enforcing an exact upper limit on the length consumes more resources, so we generally adopt the approximate form: XTRIM infotipsnews maxlen ~ 5, which means the length may temporarily exceed 5 (it can be 6, 9, etc.); it is up to Redis to decide when to actually truncate.

3. XLEN

XLEN infotipsnews
It returns the number of entries inside the stream. In the above example, XLEN returns the length of the message queue infotipsnews.

4. XDEL

XDEL infotipsnews 1631288930852-0
It removes specific entries from the message queue. The above command deletes the message with ID 1631288930852-0 from the message queue infotipsnews.

5. XRANGE

XRANGE infotipsnews - +
It reads a range of messages from the stream. “-” and “+” are the smallest and largest IDs possible, so the above command returns every message in infotipsnews. You can also pass explicit IDs to read only a sub-range.

6. XREAD

XREAD block 10000 streams infotipsnews $
It is used to read messages. “$” represents only messages that arrive after the call, and “block 10000” is the blocking time in milliseconds, i.e. 10s. In the above example, if no message arrives within 10s, XREAD returns nil; if a message arrives within 10s, it is returned.

7. XGROUP

XGROUP CREATE infotipsnews mygroup 0
XGROUP is used when you want to create a new consumer group, destroy a consumer group, delete a specific consumer, and so on. In the above example, I have created a consumer group mygroup for the message queue infotipsnews; 0 means the group starts reading from the very beginning of the stream.

In order to destroy the consumer group, execute XGROUP DESTROY infotipsnews mygroup.

8. XREADGROUP

XREADGROUP group mygroup consumer1 streams infotipsnews >
XREADGROUP is a special version of XREAD with support for consumer groups. In the above example, consumer1 in the consumer group mygroup reads messages from the message queue infotipsnews, where “>” means start reading from the first message never delivered to any consumer in the group.

It should be noted that once a message in the queue has been delivered to one consumer in a group, it can no longer be read by the other consumers in that group. The purpose of consumer groups is to let multiple consumers share the reading workload.

Therefore, we usually let each consumer read part of the messages, so the read load is distributed evenly among multiple consumers.

9. XPENDING

XPENDING infotipsnews mygroup
To ensure that consumers can still obtain unprocessed messages after a crash and restart, Streams keeps an internal pending list of the messages read by each consumer in the group until the consumer acknowledges them with the XACK command. After a restart, a consumer can use the XPENDING command to view messages that have been read but not yet acknowledged.

10. XACK

XACK infotipsnews mygroup 1631289246997-0
It means that the consumer group mygroup confirms it has processed the message with ID 1631289246997-0 in the infotipsnews message queue.

So far, we have covered how to implement a message queue with the Stream type.

Why do we use Redis as a message queue

For message queues, the usual advice is to use dedicated middleware such as Kafka or RabbitMQ, with Redis being better suited to caching. In practice, though, I think the right technology depends on the scenario you are currently facing.

If your message volume is not large and you are not sensitive to data loss, then using Redis as a message queue is a good option. After all, compared with a professional messaging system like Kafka, Redis is more lightweight and has lower maintenance costs.

Docker Compose file vs Dockerfile - Differences - Explained

The Dockerfile is used to build a custom image and does not directly create a container; it is only when you run the image that a container is created.
Container orchestration for deploying an environment is done with the docker-compose.yml file, which may in turn reference a Dockerfile.

A Dockerfile is used to build an image. To use that image, you run it with the docker run command, which creates and starts a container.

If you have multiple containers and each container depends on another, docker-compose lets us link them together.

Dockerfile

Write each layer's modification, installation, build, and run commands into a script, and use that script to build a customized image. That script is the Dockerfile.

Dockerfile part of the instructions:

# FROM specifies the base image
FROM nginx

# RUN executes a command. Each RUN creates a layer, so chain the commands for
# one requirement with && as much as possible to reduce the number of layers

RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
RUN apt-get update && apt-get install -y vim

# COPY: copy package.json from the build context to /usr/src/app on a new layer
COPY package.json /usr/src/app/

# WORKDIR sets the working directory for subsequent instructions to /data
# inside the container; try to use absolute paths
WORKDIR /data

# ADD can automatically decompress archives. The example below ends up with
# hello under /data/test
ADD hello test/

# COPY is similar to ADD, except that it does not decompress files
COPY hello test/

# CMD sets the default command to run when the container starts
CMD ["node", "index.js"]

# ENV sets environment variables; after defining NAME="Happy Feet" you can
# reference it as $NAME
ENV VERSION=1.0 DEBUG=on NAME="Happy Feet"

# EXPOSE declares the ports the container listens on
EXPOSE <port 1> [<port 2>...]

Dockerfile example


# 1. Create the Dockerfile
mkdir mynginx
cd mynginx
vim Dockerfile

# 2. Enter the following content and save:
FROM nginx
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html

# Run this in the Dockerfile directory to build the new custom image
docker build -t nginx:v3 .

Docker-compose

docker-compose is an official open-source project responsible for rapid orchestration of Docker container clusters and deployment of distributed applications. It defines a group of related application containers as one project through a single docker-compose.yml template file (YAML format).

Install docker-compose

It comes installed by default with Docker Desktop on Mac and Windows. On Linux, however, it must be installed manually; the binary package is used here.

sudo curl -L https://github.com/docker/compose/releases/download/1.26.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

// Test docker-compose
$ docker-compose --version

General steps

1. Create an empty directory.
2. Define Dockerfile to facilitate migration to any place
3. Write docker-compose.yml file
4. Run docker-compose up to start the service

Examples of using docker-compose

Next, we use Node.js to build a website that records the number of page visits.
1. Create an empty directory: mkdir -p /data/test
2. Create index.js inside that directory and enter the following. In createClient() you can use the hostname you declared in docker-compose.yml:

const express = require('express');
const app = express();
const redis  = require('redis');
app.get('/count',async function (req,res){
    const client = redis.createClient({host: 'redis-server'});
    return client.get('visit',(err, count)=>{
        count = Number(count) + 1;
        return client.set(['visit',String(count)],function(err){
        return res.status(200).send(`Total Visit ${count}`);
        })
    });
})
app.get('/ping',(req,res)=>res.send("OK"));
app.listen(3001);
3. Write the Dockerfile file:

FROM node:14
WORKDIR /code
COPY . .
RUN npm install express redis
CMD ["node", "index.js"]
4. Write the docker-compose.yml file

version: '3'
services: 
  redis-server:
    image: 'redis'
  web: 
    build: "."
    ports: 
       - "5001:3001"
    depends_on:
      - redis-server     
    links:
      - redis-server   
5. Execute the docker-compose project

docker-compose up

Description of yml template file:


version: '3'
services:
  phpfpm:
    image: yoogr/phpfpm:0.0.1
    container_name: ct-phpfpm
    build:
      context: .
      dockerfile: Dockerfile
    expose:
      - "9000"
    volumes:
      - ${DIR_WWW}:${DIR_WWW}:rw
      - ./conf/php/php.ini:/usr/local/etc/php/php.ini:ro
      - ./conf/php/php-fpm.d/www.conf:/usr/local/etc/php-fpm.d/www.conf:rw
      - ./conf/supervisor/conf.d:/etc/supervisor/conf.d/:ro
      - ./log/php-fpm/:/var/log/php-fpm/:rw
      - ./log/supervisor/:/var/log/supervisor/:rw
    command: supervisord -n
    links:
      - mysql:mysql
      - redis:redis
Each service represents a container. The container can be created from a Docker Hub image, or it can be built from a local Dockerfile. If a service needs an image built from a Dockerfile, specify the build context and the Dockerfile name under the service's build key in docker-compose.yml. The yml file can also declare volumes and networks.

The following is an example using the networks and volumes parameters (declared at the same level as services):

version: '3.0'
services:

  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: examplepass
    networks:
      - my-bridge

  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: 123456
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-bridge

volumes:
  mysql-data:

networks:
  my-bridge:
    driver: bridge
The definitions of networks and volumes here are analogous to docker network create and docker volume create. If you do not attach services to a custom network, the links mechanism is used for connections instead.

Hope you like our “Docker Compose file vs Dockerfile” post. Please subscribe to our blog to get email updates on upcoming posts.
Node Js Router (With Express.js) - Explained

The Node Js Router module is used to create endpoints in an application for handling client requests. A route is a chunk of Express code that links an HTTP verb (GET, POST, PUT, DELETE, etc.) and a URL path/pattern to a function that handles requests matching that pattern.

Routes may be created in a variety of ways. We’ll be using the npm module express in this lesson. Router middleware is useful since it allows us to aggregate the route handlers for a certain section of a website and access them using a single route prefix. We could store all of our library-related routes in a “catalogue” module and keep them separate from routes for managing user accounts or other tasks.

Node Js Router module

In a module called Login.js, we first construct the Login routes. The code imports the Express module, uses it to acquire a Router object, adds a couple of routes to it with the get() function, and finally exports the Router object.


var express = require('express');
var router = express.Router();

// Login page.
router.get('/', function (req, res) {
  res.send('Login page');
})

// Dashboard page.
router.get('/about', function (req, res) {
  res.send('You can view Dashboard');
})

module.exports = router;
We create an instance with express.Router() and use those routes to view the Login page as well as the Dashboard: the Login page is at https://localhost:3001/ and the Dashboard at https://localhost:3001/about.


var Login = require('./Login.js');
// ...
app.use('/Login', Login);
To use the router module in our main app code, we first require() it (Login.js). We then add the Router to the middleware processing path by calling use() on the Express application with a URL path of ‘/Login’.

Now that Login.js is mounted as middleware, our URLs become https://localhost:3001/Login/ and https://localhost:3001/Login/about.

We can create more such routers and apply them to our application; this is a very powerful feature. A Router could be used for basic routes, authenticated routes, and API routes.

By generating numerous instances of the Router and applying them to our apps, we can make them more modular and versatile than ever before. We’ll now look at how to process requests with middleware.

Route Paths

Route paths define the endpoints at which requests can be made. ‘/’, ‘/about’, ‘/book’, and ‘/any-random.path’ are the only strings we’ve seen so far, and they’re matched exactly as written.

String patterns can also be used as route paths. To create patterns of endpoints that will be matched, string patterns utilize a type of regular-expression syntax. The syntax is as follows (note that string-based paths interpret the hyphen (-) and the dot (.) literally):

+: The endpoint must contain one or more of the preceding character (or group); for example, a route path of ‘/ab+cd’ will match abcd, abbcd, abbbcd, and so on.

(): ‘/ab(cd)?e’ applies the ?-match to the group (cd): it matches abe and abcde.

?: The preceding character (or group) may occur 0 or 1 times; e.g., a route path of ‘/ab?cd’ will match the endpoints acd and abcd.

*: The * character may be placed anywhere in the endpoint’s string and matches any characters. Endpoints abcd, abXcd, abSOMErandomTEXTcd, and so on will be matched by a route path of ‘/ab*cd’, for example.
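
For intuition, the string patterns above behave roughly like the following regular expressions (hand-written equivalents for illustration only; Express actually compiles paths internally via path-to-regexp):

```javascript
// Rough regular-expression equivalents of the Express string patterns:
const plus = /^\/ab+cd$/;     // '/ab+cd'
const optional = /^\/ab?cd$/; // '/ab?cd'
const group = /^\/ab(cd)?e$/; // '/ab(cd)?e'
const star = /^\/ab.*cd$/;    // '/ab*cd'

console.log(plus.test('/abbbcd'));  // true
console.log(plus.test('/acd'));     // false (needs at least one b)
console.log(optional.test('/acd')); // true
console.log(group.test('/abe'));    // true
console.log(star.test('/abXcd'));   // true
```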

Conclusion

Thanks to the Express 4.0 Router, we now have more flexibility than ever before in creating our routes. To summarise, we can:
  • To define groups of routes, call express.Router() multiple times.
  • Use app.use() to apply express.Router() to a portion of our site and handle requests with route middleware.
  • Use route middleware (.param()) to validate parameters.
  • To define numerous requests on a route, use app.route() as a shortcut to the Router.

With all of the different methods, we can create routes, I’m confident that our applications will improve in the future. If you have any questions or recommendations, please leave them in the comments section.
How to make a Graphql API in Express Framework - Easily

GraphQL is booming! It is growing very fast. I use it in almost all of my projects and I love the design pattern. In this post I am going to show you how to make a GraphQL API using Express (a popular Node.js web framework) and the popular Apollo GraphQL tools.

What is Graphql

GraphQL is a sort of SQL-like replacement for RESTful APIs. In GraphQL you make queries instead of hitting endpoints, and you ask your server for only the parts of the data you want. GraphQL exposes a single endpoint that handles all sorts of operations.

Official definition (from the docs): “GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data.” In GraphQL you define your data structure in a schema, somewhat like you define tables in a SQL database, and manipulate it using the operations GraphQL provides.

There are mainly three types of operations in GraphQL. Instead of a GET request you make a query. In place of a POST, PUT, or DELETE request you make a mutation; as the name suggests, a mutation is the operation that modifies data on the server.

The third type, subscriptions, is used to make real-time connections through sockets; it uses a pub/sub mechanism. These three are also called root schema types.
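
The three root types can be sketched in schema-definition form (a sketch only; the tutorial below uses just Query and Mutation):

```graphql
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}
```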

We will build an API for handling a bookstore: first we define the schema using different types, then we resolve those types by fetching data from a source such as a database (MongoDB or Postgres, say). You may even use an existing RESTful API as a source and wrap it in a GraphQL API.

Prerequisites for the tutorial

As usual you will need:
  • Node.js (I recommend getting version 8 LTS, but don’t use anything below 7.6).
  • MongoDB (although you can use any other DB; just adapt the resolver logic).
  • I recommend having yarn installed.
  • A good browser.
  • I also recommend a GraphQL plugin for your text editor. VSCode has some.

Install Dependencies

As expected we will start by bloating our holy node_modules folder with dependencies.

$ yarn init -y # or npm init -y
$ yarn add express cors graphql graphql-tools apollo-server-express mongoose # or
$ npm i express cors graphql graphql-tools apollo-server-express mongoose --save # for npm users

Setting up express & mongoose boilerplate

Create a file app.js in the root of your project directory. This is simple Express & Mongoose boilerplate; you might wanna copy-paste it.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');

mongoose.Promise = global.Promise;
mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});
mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});
mongoose.model('Book', BookSchema);

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});
Start your server with node app.js. Pointing your browser to http://localhost:8080 should give Cannot GET / since we have not defined any routes. You should install nodemon for live server reloading.

Defining our Schema

Our schema, with Apollo's tools, is essentially a giant string. We will be defining the structure of our bookstore data.

Create a file named schema.js and add the following.

module.exports = `
  type Query{
    getAllBooks: [Book]
    getBookById(id: String!): Book
  }
  type Mutation{
    postBook(title:String! author:String! price: Int!): Book!
    deleteBook(id:String!): Book
    updateBook(title:String! author:String! price: Int! id:String!): Book!
  }
  type Book{
    _id: String!
    title: String!
    author: String!
    price: Int!
  }
`
Looks familiar, right? Sort of like JSON. There are the two root types I explained above: query and mutation. We define the query name as the key and what we want it to return as the value. You may be wondering how I used Book in there.

Well, we can group some properties to make a custom type. It consists of primitive types like String, Int, Bool, Float, etc. The ! denotes that the field can’t resolve to null; if it does, an error is thrown. It will make more sense when we resolve the fields.

Resolving our fields

Now that we have defined the structure of our data in the schema, we have to resolve it in some way. In this example we are going to use Mongoose to grab the data. A resolver function takes three main arguments.

The first argument is called parent, and it holds the already-resolved value of the parent field. It is useful if we want to manipulate the data after fetching it from the DB, but it will not be required in this example. The second argument, often called args or values, holds the values passed to the query when it is called.

See the getBookById query above: the argument is the id defined in the parentheses. The third argument is called context. It is provided when we create our endpoint and can hold values like req.user, secrets, etc.

So now create a resolver.js and add the following.

const mongoose = require('mongoose');

const Book = mongoose.model('Book');

module.exports = {
  Query: {
    getAllBooks: async () => await Book.find(),
    getBookById: async (parent, args) => await Book.findById(args.id)
  },
  Mutation: {
    postBook: async (parent, args) => {
      const newBook = new Book(args);
      return await newBook.save();
    },
    deleteBook: async (parent, { id }) => { // You can destructure the args
      return await Book.findByIdAndRemove(id)
    },
    updateBook: async (parent, { id: _id, ...doc }) => {
      await Book.update({ _id }, doc);
      return { _id, ...doc }
    }
  }
}
As seen in the code, the resolver is a giant object with Query and Mutation as the root keys. Each query and mutation we defined earlier in our schema appears here as a function that ultimately returns the data.

Making the Graphql endpoint

Add this code to create the GraphQL endpoint; it is simple Express middleware stuff. We are also going to add GraphiQL, a GraphQL client for testing the API, sort of like Postman if you are familiar with that.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');
+ const { makeExecutableSchema } = require('graphql-tools');
+ const { graphiqlExpress, graphqlExpress } = require('apollo-server-express');

mongoose.Promise = global.Promise;

mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});

mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});

mongoose.model('Book', BookSchema);

+ const typeDefs = require('./schema');
+ const resolvers = require('./resolver');

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;


+ const schema = makeExecutableSchema({
+  typeDefs,
+  resolvers
+ });

+ app.use(
+  '/graphql',
+  express.json(),
+  graphqlExpress(() => ({
+    schema,
+  })),
+ );

+ app.use(
+  '/graphiql',
+  graphiqlExpress({
+    endpointURL: '/graphql',
+  }),
+ );

app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});

Test it using Graphiql

Go to http://localhost:8080/graphiql to test your work.
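
For example, you could paste something like the following into GraphiQL (the operations match the schema we defined earlier; the book values are just sample data):

```graphql
mutation AddBook {
  postBook(title: "Some Book", author: "Some Author", price: 10) {
    _id
    title
  }
}

query AllBooks {
  getAllBooks {
    title
    author
    price
  }
}
```

Run the mutation first, then the query, choosing the operation in GraphiQL's dropdown.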

Conclusion

GraphQL is an amazing technology. See how easy it is to implement a CRUD API. You should definitely use it in your stack.
Scaling and Securing WebSockets with HAProxy - Explained

In this article, I will discuss WebSockets and how to scale and secure them with HAProxy, with configuration, code, and more.

What is WebSockets

Whenever a client tries to establish a WebSocket connection with a web server, the first thing the browser does is send the connection as “Upgrade” in a request header. When the web server receives the Upgrade header, it knows the client wants to change the connection from HTTP to WebSocket.

Upgrade Header is used to confirm that the client is entitled to request an upgrade to WebSocket. The Upgrade general header allows the client to specify what additional communication protocols it supports and would like to use if the server finds it appropriate to switch protocols.

What is a Reverse Proxy

A reverse proxy accepts a request from a client, forwards it to a server that can fulfil it, and returns the server’s response to the client.

Scaling WebSockets

In the illustration below, we have 2 clients and 2 backends running WebSocket servers. If Client 1 makes a request to the reverse proxy by sending an upgrade packet, the proxy forwards it to one of the backend servers. Not all reverse proxies support WebSocket; only a few do, such as NGINX and HAProxy. Assuming the load-balancing algorithm is round robin, the request is going to connect to Server 1.

Now we have one TCP connection between Client 1 and the reverse proxy, and another TCP connection between the reverse proxy and the WebSocket server. Since this is the WebSocket protocol, the reverse proxy acts as a layer-4 proxy: every single packet Client 1 sends is streamed to Server 1 over what is effectively a dedicated 1:1 TCP connection.

Similarly, Client 2 makes a request and is connected to Server 2. The main issue is that if Client 1 wants to send a message to Client 2, Server 2 is unaware of the connection between Client 1 and Server 1. To make Server 2 aware, we need to use Redis, which helps in passing events between servers.

Coding

Each server only has a list of the clients connected to that particular server. If you want to pass messages from one server to another, you must share them between servers through something like Redis, or a publisher/subscriber broker such as RabbitMQ. In this example, I have configured HAProxy and broadcast messages to all connected clients.

I have taken 3 servers: 1 master and 2 slaves.

Master: 3.144.84.135 -> For HAProxy Configuration
Slave : 18.218.115.196 , 52.14.245.146 -> Run WebSockets

STEP 1: Install HAProxy on the master server and configure it with the instructions below

frontend haproxynode
bind *:80
default_backend backendnodes

backend backendnodes
balance roundrobin
server server1 18.218.115.196:4000 check
server server2 52.14.245.146:4000 check
STEP 2: Configure the slave servers with the instructions below. Create an index.html file and replace the IP with the HAProxy IP.

<html lang="en">
    <head>
        <title>WebSocket Server 1</title>
    </head>
    <body>
        <h1>WebSocket Example Server 1</h1>

        <form>
            <label for="message">Message:</label><br />
            <input type="text" id="message" name="message" /><br />
            <input type="button" id="sendButton" value="Send" />
        </form>

        <div id="output"></div>

        <script type="text/javascript">
            window.onload = function() {
                // connect to the server
                let socket = new WebSocket("ws://3.144.84.135/ws/echo");
                socket.onopen = () => socket.send("Client connected!");

                // send a message to the server
                var sendButton = document.getElementById("sendButton");
                var message = document.getElementById("message");
                sendButton.onclick = () => {
                    socket.send(message.value);
                }

                // print a message from the server
                socket.onmessage = (evt) => {
                    var output = document.getElementById("output");
                    output.innerHTML += `<div>${evt.data}</div>`;
                }
            }
        </script>
    </body>
</html>
STEP 3: Configure Slave Server with the below code. Create an index.js file and run the server.

const express = require('express');
const app = express();
const path = require('path');
const expressWs = require('express-ws')(app);

// Serve web page HTML
app.get('/ws', (req, res) => {
    res.sendFile(path.join(__dirname + '/index.html'));
});

// WebSocket function
app.ws('/ws/echo', (ws, req) => {
    // receive a message from a client
    ws.on('message', msg => {
        console.log(msg);

        // broadcast message to all clients
        var wss = expressWs.getWss();
        wss.clients.forEach(client => client.send("Received: " + msg));
    })
});

app.listen(4000);

Secure WebSocket Connection

  • Enable CORS.
  • Restrict payload size.
  • Authenticate users before the WS connection is established.
  • Use TLS (wss://) over the WebSocket connection.

To learn more: visit secure your WebSocket connections.

Hope you like our “Scaling and Securing WebSockets with HAProxy” blog. Please subscribe to our blog for upcoming posts.
Rhino JS vs Node JS - Key differences and more - Explained

In this article, I will discuss the differences between Rhino JS and Node JS. Even I am new to Rhino JS; I have read from different sources and have put them together in this blog.

Rhino JS vs Node JS

Rhino is just a JavaScript engine, written in Java, that can execute JavaScript code. It is not connected to the npm ecosystem, and there are no other packages. Node.js is a runtime environment for JavaScript programs that includes not only the core-language implementation provided by V8 but also a wealth of libraries.

In simple terms, Node JS is a standalone, event-driven, asynchronous JavaScript environment based on V8, and it can be used for building lightweight, real-time applications.

Code Example

Rhino JS

The example below simply prints the arguments:

function print() {
    for( var i = 0; i < arguments.length; i++ ) {
       var value = arguments[i];
       java.lang.System.out.print( "PRINT ARGUMENT OUTPUT: "+value );
    }
    java.lang.System.out.println();
}
print("InfoTipsNews")

Node JS

The example below simply prints the first command-line argument:

console.log("PRINT ARGUMENT OUTPUT: ", process.argv[2]);

So why is Rhino JS being discussed together with Node.js

Obviously, this comes from the need to use JavaScript for programming on the server side.
JVM-based language implementations are a popular choice for server-side programs, and Rhino is a JavaScript engine implemented in Java that runs on the JVM. It can seamlessly use Java’s rich core libraries and third-party libraries, so there are many server-side JavaScript solutions based on Rhino.

Node.js is based on the Google V8 JavaScript engine and implements core libraries such as I/O by itself. Later, managing third-party libraries through npm became the norm, and Node became a popular, emerging server-side JavaScript solution.

You can refer to this Wiki page to see what options are available for server-side JavaScript: Comparison of server-side JavaScript solutions.


Is Rhino JS compatible with Node.js?

Not compatible. Rhino is a JavaScript engine written in Java, while Node JS is a server-side JavaScript environment built on the V8 engine. Being compatible with Node.js would mean providing the same event-based core library packages, but Rhino is a pure JavaScript engine and does not provide the Node.js core library API.

The two are different things: there is no direct competition between them, so compatibility or incompatibility is not really the question.

Conclusion

Node isn’t built on top of the JVM, so if you’re looking to interoperate with Java, Rhino is one of the few choices out there. The difference comes down to this: Rhino is just an engine, while Node JS is a runtime built on top of an engine. Rhino is more comparable to V8, the engine that powers Node JS.
Software Development Life Cycle SDLC - Phase by Phase

Software Development Life Cycle (SDLC Life Cycle)

The SDLC is the structured process of developing software. There are different stages in the SDLC, and each stage has its own activities. It enables development teams to design, build, and deliver high-quality products.

The requirements are transformed into a design, the design is transformed into development, and development is transformed into testing; after testing, the product is delivered to the client.

The phases of the SDLC life cycle are the Requirement Phase, Design Phase, Implementation Phase, Testing Phase, Deployment/Delivery Phase, and Maintenance.

1. Requirement Phase

For development teams and project managers, this is the most critical stage in the software development life cycle. At this stage, the customer states requirements, specifications, expectations, and any other special needs related to the product or software. All of these are collected by the business manager, project manager, or analyst of the service-provider company.

The requirements cover how the product will be used and by whom, which helps determine the expected operating load. All the information collected at this stage is essential for developing the product to the customer’s requirements.

2. Design Phase

The design phase includes a detailed analysis of the new software based on the requirements phase. This is a high-priority stage in the system development life cycle because the logical design of the system is converted into a physical design. The output of the requirements phase is a collection of what is needed, and the design phase provides a way to achieve these requirements.

At this stage, all necessary tools are determined: programming languages such as Java, .NET, or PHP; databases such as Oracle or MySQL; and the combination of hardware and software that will provide a platform on which the software can run without problems.

Several techniques and tools are available for describing the system design, such as data flow diagrams, flowcharts, decision tables and decision trees, and data dictionaries.

3. Implementation Phase

After successfully completing the requirements and design phases, the next step is to implement the design into the development of the software system. In this phase, the work is divided into small parts, and the development team starts coding according to the design discussed in the previous phase and according to the customer’s needs discussed in the requirements phase to produce the desired result.

Front-end developers build easy-to-use, attractive GUIs and the interfaces needed to interact with back-end operations, while back-end developers write the back-end code for the required operations. All work is carried out in accordance with the procedures and guidelines set by the project manager.

Since this is the coding phase, it takes the longest time in the software development life cycle and demands the most focused effort from developers.
 

4. Testing Phase

Testing is the last step in completing the software system. In this phase, the combined GUI and back-end are tested against the requirements described in the requirements phase. Testing determines whether the software actually produces results that match those requirements. The testing team prepares a test plan to start the testing.

The test plan includes all types of basic tests, such as integration tests, unit tests, acceptance tests, and system tests. Non-functional testing will also be performed at this stage.
If the software has any defects or fails to work as expected, the testing team provides detailed information about the problem to the development team.

If it is a valid defect worth resolving, the development team fixes it and delivers a new build, which then needs to be verified again.

5. Deployment/ Deliver Phase

When software testing is complete, the results are satisfactory, and no outstanding problems remain, the software is delivered to the customer for use. After the customer receives the product, it is recommended that they conduct a Beta test (acceptance test) first.

In the Beta test, customers can request any changes that are mentioned in the documentation but missing from the software, or any other GUI changes to make it more user-friendly. In addition, if a customer encounters any defect while using the software, it is reported to the development team of that particular software to resolve.

If it is a serious problem, the development team resolves it quickly; otherwise, if it is not too serious, the fix waits for the next version. After all errors and changes are resolved, the software is finally deployed to the end user.

6. Maintenance

The maintenance phase is the final and longest-running phase of the SDLC, because it continues until the end of the software’s life cycle. When customers start to use the software, real problems arise, and these problems need to be resolved as they occur.

This stage also includes changes to the hardware and software to maintain operational efficiency, such as improving performance, enhancing security features, and adapting over time to customer needs.

This process of taking care of the product from time to time is called maintenance.

Software Development Life Cycle (SDLC) model

There are various software development models or methods:
  • Waterfall model
  • Spiral model
  • Verification and validation model
  • Prototype model
  • Hybrid model

Waterfall model

This is the first sequential, linear model: the output of one stage is the input of the next stage. It is easy to understand and is used for small projects.

The stages of the waterfall model are as follows:
  • Requirement analysis
  • Feasibility study
  • Design
  • Coding
  • Testing
  • Installation
  • Maintenance

Spiral model

It is the best-suited model for intermediate projects. It is also called a cyclic and iterative model. We use this model when the modules depend on each other. Here, we develop the application module by module and hand each increment over to the customer. The different stages of the spiral model are as follows:
  • Demand collection
  • Design
  • Coding
  • Test

Prototype model

Because customer rejection rates were high with earlier models, this model was adopted to reduce them. It allows us to prepare a sample (prototype) in the early stages of the process, show it to the customer, and get their approval before starting work on the actual project.

The model refers to the operation of creating an application prototype. 

Verification and validation model

It is an extended version of the waterfall model and is implemented in two streams. In the first, we perform the verification process; when the application is ready, we perform the validation process.

In this model, execution takes place in a V shape: verification is completed on the downward flow and validation is completed on the upward flow.
