How to Find Blogger RSS Feed URL - Easily (Blogspot)

An RSS feed URL is a link that enables readers to subscribe to a blog, so they can stay up to date with its latest posts and articles.

The feed itself is XML data containing the title, summary, and link for each post. A reader who wants more after skimming a summary can follow the link to the full article. Readers can subscribe to these feeds with RSS readers such as Follow.it or FeedBurner and receive a notification whenever new content is published.
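For illustration, a single entry in such a feed looks roughly like this (the title, link, summary, and date below are placeholders, not from a real feed):

```xml
<item>
  <title>Example post title</title>
  <link>https://example.com/2021/09/example-post.html</link>
  <description>A short summary of the post...</description>
  <pubDate>Mon, 13 Sep 2021 10:00:00 GMT</pubDate>
</item>
```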

How to Find Blogger RSS Feed URL

Finding a Blogger RSS feed URL can be a little tricky. On a WordPress site it is pretty easy, since there are quite a lot of plugins to do it, but I will show you how to easily find the RSS feed URL of your Blogger site. There are two ways to find a Blogger RSS feed URL, and I will show you both of them.

Method 1: From Page Source

With this method you can find your Blogger site's RSS feed URL directly. You do need a computer, though; this method won't work on mobile devices. Follow these steps:

Step 1: Visit Your Site

Open Google Chrome (or any desktop browser), enter your site's URL, and load the site.

Step 2: Right-Click

While your site is open in the tab, right-click anywhere on the page.

Step 3: View Page Source

Click View Page Source in the menu that appears. This opens the page source in a new tab.

Step 4: Find Feed

In the page source tab, press Ctrl+F on the keyboard to open the search bar, type "feed", and press Enter. You will find three feed URLs in total, and all of them can be used as a feed. Only one of them is the RSS feed, though; the others are Atom feed URLs. The one containing the ?alt=rss parameter is the RSS feed URL.
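The feed URLs appear as <link rel="alternate"> tags in the page head. They look roughly like this (the blog name and the numeric blog ID here are placeholders):

```html
<link rel="alternate" type="application/atom+xml" title="Example Blog - Atom" href="https://example.com/feeds/posts/default" />
<link rel="alternate" type="application/rss+xml" title="Example Blog - RSS" href="https://example.com/feeds/posts/default?alt=rss" />
<link rel="service.post" type="application/atom+xml" title="Example Blog - Atom" href="https://www.blogger.com/feeds/1234567890/posts/default" />
```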

 

Method 2: From Your Domain

This method is indirect but the easiest. You can find the RSS feed URL of any Blogger website with it, and it works on any device. Follow the steps below:

Step 1: Find Your Domain URL

Find your domain URL. For this example we will assume it is https://example.com/.

Step 2: Add feeds/posts/default?alt=rss

Append "feeds/posts/default?alt=rss" to your domain URL. For example, if your domain URL is https://example.com/, your RSS feed URL will be https://example.com/feeds/posts/default?alt=rss.
And your atom feed URL will be https://example.com/feeds/posts/default.
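The two rules above can be captured in a few lines of JavaScript (bloggerFeedUrls is just an illustrative helper name; the paths are the default Blogger feed paths described above):

```javascript
// Sketch: build the default Blogger feed URLs from a blog's base URL.
function bloggerFeedUrls(domain) {
  const base = domain.endsWith('/') ? domain : domain + '/';
  return {
    atom: base + 'feeds/posts/default',
    rss: base + 'feeds/posts/default?alt=rss',
  };
}

console.log(bloggerFeedUrls('https://example.com'));
```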

Redis Message Queue - Implementation - Commands

In this blog post, I will discuss the Redis message queue built on the Stream type. Redis introduced the Stream data type in version 5.0, and it is designed specifically for message queues. There are many dedicated message queues, such as RabbitMQ and Kafka, but here we will only discuss how to use Redis as a message queue.

In a distributed system, when two components communicate through a message queue, the component that sends a message to the queue is called the producer, and the component that consumes the message and then processes it is called the consumer. As shown below:

A message queue exists mainly to smooth out mismatched processing capacities between producers and consumers, and it is an almost indispensable piece of middleware at large companies. Before 5.0, Redis already had a message queue feature based on publish/subscribe (pub/sub).

Pub/sub has a disadvantage: when Redis goes down, the network disconnects, and so on, messages are simply discarded. Redis Stream, however, provides message persistence and master-slave replication, so any client can access the data at any time; it remembers each client's read position and guarantees that messages are not lost.

Redis Message Queue Commands

1. XADD

XADD infotipsnews * 1 hello
XADD inserts a message into the message queue (in this example the queue is named infotipsnews). The field of the message is 1 and the value is "hello". The "*" after infotipsnews tells Redis to auto-generate a globally unique ID.

For the inserted message Redis automatically generates an ID such as 1631288930852-0. The first half, 1631288930852, is the server's UNIX time in milliseconds, and the second half, 0, is a sequence number used to distinguish messages delivered within the same millisecond.
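Since the ID encodes the timestamp, it can be unpacked without asking Redis. A small sketch (parseStreamId is an illustrative helper, not part of any Redis client; the ID is the example from above):

```javascript
// A Stream entry ID has the form "<milliseconds>-<sequence>".
function parseStreamId(id) {
  const [ms, seq] = id.split('-');
  return { timestampMs: Number(ms), sequence: Number(seq) };
}

const parsed = parseStreamId('1631288930852-0');
console.log(new Date(parsed.timestampMs).toISOString()); // 2021-09-10T15:48:50.852Z
```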

2. XTRIM

XTRIM infotipsnews maxlen 5
XTRIM removes older entries from the message queue based on a parameter such as MAXLEN or MINID. When the stream reaches the maximum length, the oldest messages are deleted. In the example above, once the stream exceeds a maximum length of 5, the old messages are removed.

Due to the internal implementation of streams, enforcing an exact upper bound on the length costs extra resources, so we generally use the approximate form XTRIM infotipsnews maxlen ~ 5. This means the length may exceed 5 (it could be 6, 9, and so on); Redis decides when to actually truncate.

3. XLEN

XLEN infotipsnews
XLEN returns the number of entries inside a stream. In the example above, it returns the length of the message queue infotipsnews.

4. XDEL

XDEL infotipsnews 1631288930852-0
XDEL removes specific entries from the message queue. The command above deletes the message with ID 1631288930852-0 from the message queue infotipsnews.

5. XRANGE

XRANGE infotipsnews - +
XRANGE reads messages within a range of IDs, where "-" is the smallest possible ID and "+" is the greatest. The example above therefore returns every entry in the message queue infotipsnews.

6. XREAD

XREAD block 10000 streams infotipsnews $
XREAD reads messages. "$" means only messages newer than the latest one, and "block 10000" is the blocking time in milliseconds, i.e. 10s. In the example above, if no message arrives, XREAD blocks for 10s and then returns NIL; if a message arrives within 10s, it is returned.

7. XGROUP

XGROUP CREATE infotipsnews mygroup 0
XGROUP is used to create a new consumer group, destroy a consumer group, delete a specific consumer, and so on. In the example above, I have created a consumer group mygroup for the message queue infotipsnews; the 0 means the group starts reading from the very beginning of the stream.

To destroy the consumer group, execute XGROUP DESTROY infotipsnews mygroup.

8. XREADGROUP

XREADGROUP group mygroup consumer1 streams infotipsnews >
XREADGROUP is a special version of XREAD with support for consumer groups. In the example above, consumer1 in the consumer group mygroup reads messages from the message queue infotipsnews, where ">" means to start reading from the first message not yet consumed by the group.

Note that once a message in the queue has been consumed by one consumer in a group, it can no longer be read by the other consumers in that group. The purpose of consumer groups is to let multiple consumers in the group share the work of reading messages.

Therefore, we usually let each consumer read part of the messages, so the read load is distributed evenly among the consumers.

9. XPENDING

XPENDING infotipsnews mygroup
To make sure consumers can still obtain unprocessed messages after a crash and restart, Streams keeps an internal queue of the messages each consumer in the group has read, until the consumer notifies Streams with the XACK command that a message has been processed. After a restart, the consumer can use the XPENDING command to see the messages that were read but never acknowledged.

10. XACK

XACK infotipsnews mygroup 1631289246997-0
This states that the consumer group mygroup confirms it has processed the message with ID 1631289246997-0 in the message queue infotipsnews.

So far, we have covered how to implement a message queue with the Stream type.

Why do we use Redis as a message queue

The usual advice is to use dedicated message queue middleware such as Kafka or RabbitMQ, and to use Redis for caching. In practice, I think the right technology depends on the scenario you are facing.

If your message volume is not large and you are not very sensitive to data loss, Redis is a fine choice for a message queue. Compared with a professional messaging system like Kafka, it is more lightweight and cheaper to maintain.

Docker Compose file vs Dockerfile - Differences - Explained

A Dockerfile is used to build a custom image; it does not directly create a container. You get a container only when you run the image.

Container orchestration for deploying an environment is done with a docker-compose.yml file, which may in turn reference a Dockerfile.

So: a Dockerfile builds an image, and if you want to use that image you run it with docker run, which creates and starts a container.

If you have multiple containers and some depend on others, docker-compose can link them together.

Dockerfile

You write each layer's modification, installation, build, and run commands into a script, and use that script to build and customize the image. That script is the Dockerfile.

Some of the Dockerfile instructions:

# FROM specifies the base image
FROM nginx

# RUN executes a command. Each RUN creates a new layer, so chain related
# commands with && where possible to keep the number of layers down
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
RUN yum update && yum install -y vim python-dev

# COPY copies package.json from the build context into /usr/src/app in a new layer
COPY package.json /usr/src/app/

# WORKDIR sets the working directory inside the container to /data;
# prefer absolute paths
WORKDIR /data

# ADD is like COPY but can also decompress archives. The example below
# ends up with hello under /data/test
ADD hello test/

# COPY is similar to ADD, except it does not decompress files
COPY hello test/

# CMD sets the command to run when the container starts
CMD ["node", "index.js"]

# ENV sets environment variables; after defining NAME="Happy Feet" you can
# reference it as $NAME
ENV VERSION=1.0 DEBUG=on NAME="Happy Feet"

# EXPOSE declares the ports the container listens on
EXPOSE <Port 1> [<Port 2>...]

Dockerfile example


# 1. Create the Dockerfile
mkdir mynginx
cd mynginx
vim Dockerfile

# 2. Enter the following content and save:
FROM nginx
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html

# 3. Run this in the Dockerfile's directory to build the new custom image
docker build -t nginx:v3 .

Docker-compose

docker-compose is an official open source project responsible for fast orchestration of Docker container clusters and deployment of distributed applications. It defines a group of related application containers as a single project through one docker-compose.yml template file (YAML format).

Install docker-compose

It is installed by default with Docker on Mac and Windows, but on Linux it must be installed manually. Here we use the binary release:

sudo curl -L https://github.com/docker/compose/releases/download/1.26.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

// Test docker-compose
$ docker-compose --version

General steps

1. Create an empty directory.
2. Define Dockerfile to facilitate migration to any place
3. Write docker-compose.yml file
4. Run docker-compose up to start the service

Examples of using docker-compose

Next, we use Node.js to build a website that records the number of page visits.
1. Create an empty directory: mkdir -p /data/test
2. Create index.js in that directory and enter the following. In createClient() you can use the hostname of the Redis service defined in docker-compose.yml:

const express = require('express');
const app = express();
const redis  = require('redis');
app.get('/count',async function (req,res){
    const client = redis.createClient({host: 'redis-server'});
    return client.get('visit',(err, count)=>{
        count=Number(count)+1;
        return client.set(['visit',String(count)],function(err){
        return res.status(200).send(`Total Visit ${count}`);
        })
    });
})
app.get('/ping',(req,res)=>res.send("OK"));
app.listen(3001);
3. Write the Dockerfile file:

FROM node:14
WORKDIR /code
COPY . .
RUN npm install express redis
CMD ["node", "index.js"]
4. Write the docker-compose.yml file

version: '3'
services: 
  redis-server:
    image: 'redis'
  web: 
    build: "."
    ports: 
       - "5001:3001"
    depends_on:
      - redis-server     
    links:
      - redis-server   
5. Execute the docker-compose project

docker-compose up

Description of yml template file:


version: '3'
services:
  phpfpm:
    image: yoogr/phpfpm:0.0.1
    container_name: ct-phpfpm
    build:
      context: .
      dockerfile: Dockerfile
    expose:
      - "9000"
    volumes:
      - ${DIR_WWW}:${DIR_WWW}:rw
      - ./conf/php/php.ini:/usr/local/etc/php/php.ini:ro
      - ./conf/php/php-fpm.d/www.conf:/usr/local/etc/php-fpm.d/www.conf:rw
      - ./conf/supervisor/conf.d:/etc/supervisor/conf.d/:ro
      - ./log/php-fpm/:/var/log/php-fpm/:rw
      - ./log/supervisor/:/var/log/supervisor/:rw
    command: supervisord -n
    links:
      - mysql:mysql
      - redis:redis
Each service represents a container. A container can be created from an image on Docker Hub, or from an image built by a local Dockerfile. If a service needs an image built from a Dockerfile, specify the build context and Dockerfile name under the service's build key in docker-compose.yml. The yml file can also define volumes and networks.

The following is an example using the networks and volumes keys (placed at the same level as services):

version: '3.0'
services:

  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: examplepass
    networks:
      - my-bridge

  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: 123456
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-bridge

volumes:
  mysql-data:

networks:
  my-bridge:
    driver: bridge
Defining networks and volumes here is similar to running docker network create and docker volume create. If you do not specify a network to join, links are used instead.

Hope you liked our "Docker Compose file vs Dockerfile" post. Please subscribe to our blog to get email updates on upcoming posts.
Node Js Router (With Express.js) - Explained

The Node.js router module is used to create endpoints that handle client requests. A route is a piece of Express code that links an HTTP verb (GET, POST, PUT, DELETE, etc.) and a URL path/pattern to a function that handles matching requests.

Routes may be created in a variety of ways. We'll be using the npm module express in this lesson. Router middleware is useful because it lets us group the route handlers for a certain section of a website and access them under a single route prefix. For example, we could keep all of our library-related routes in a "catalogue" module, kept separate from routes for managing user accounts or other tasks.

Node Js Router module

In a module called Login.js, we first construct the Login routes. The code imports the Express module, uses it to acquire a Router object, and then adds a couple of routes to it with the get() function. Finally, the module exports the Router object.


var express = require('express');
var router = express.Router();

// Login page.
router.get('/', function (req, res) {
  res.send('Login page');
})

// Dashboard page.
router.get('/about', function (req, res) {
  res.send('You can view Dashboard');
})

module.exports = router;
We create an instance with express.Router() and use these routes to serve the Login page and the Dashboard. We can access the Login page at https://localhost:3001/ and the Dashboard at https://localhost:3001/about.


var Login = require('./Login.js');
// ...
app.use('/Login', Login);
To use the router module in our main app code we must first require() it (Login.js). We then add the Router to the middleware handling path by calling use() on the Express application with a URL path of '/Login'.

With Login.js mounted as middleware, our URLs become https://localhost:3001/Login/ and https://localhost:3001/Login/about.

We can create more such routers and apply them to our application; this is a very powerful feature. A Router could be used for basic routes, authenticated routes, and API routes.

By generating numerous instances of the Router and applying them to our apps, we can make them more modular and versatile than ever before. We’ll now look at how to process requests with middleware.

Route Paths

Route paths define the endpoints at which requests can be made. '/', '/about', '/book', and '/any-random.path' are the only strings we've seen so far, and they are matched exactly as written.

String patterns can also be used as route paths. String patterns use a form of regular expression syntax to define patterns of endpoints that will be matched. The syntax is as follows (note that string-based paths interpret the hyphen (-) and the dot (.) literally):

+: The previous character (or group) must appear one or more times; for example, a route path of '/ab+cd' will match abcd, abbcd, abbbcd, and so on.

(): Groups characters so a modifier applies to all of them; '/ab(cd)?e' applies ? to the group (cd), so it matches abe and abcde.

?: The previous character (or group) may appear 0 or 1 times; e.g., a route path of '/ab?cd' will match the endpoints acd or abcd.

*: Matches any string of characters at that position, and may be placed anywhere in the endpoint string. For example, a route path of '/ab*cd' will match the endpoints abcd, abXcd, abSOMErandomTEXTcd, and so on.
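Under the hood, Express turns these string patterns into regular expressions (via the path-to-regexp module). As a rough, hand-written illustration of the '/ab+cd' pattern in plain JavaScript:

```javascript
// Approximate regular-expression equivalent of the route path '/ab+cd'.
const abPlusCd = /^\/ab+cd$/;

console.log(abPlusCd.test('/abcd'));   // true
console.log(abPlusCd.test('/abbbcd')); // true
console.log(abPlusCd.test('/acd'));    // false
```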

Conclusion

We now have more flexibility than ever before in creating our routes thanks to the Express 4.0 Router. To summarise, we can:
  • Call express.Router() multiple times to define groups of routes.
  • Apply express.Router() to a portion of our site using app.use() and handle requests with route middleware.
  • Use route middleware (.param()) to validate parameters.
  • Use app.route() as a shortcut to the Router to define multiple requests on one route.

With all of these different ways to create routes, I'm confident that our applications will keep improving. If you have any questions or recommendations, please leave them in the comments section.
How to make a Graphql API in Express Framework - Easily

GraphQL is booming and growing very fast. I am using it in almost all of my projects, and I love the design pattern. In this post I am going to show you how to build a GraphQL API using Express (a popular Node.js web framework) and the popular Apollo GraphQL tools.

What is Graphql

GraphQL is a sort of SQL-like replacement for RESTful APIs. In GraphQL you write queries instead of hitting endpoints, and you ask the server for only the parts of the data you want. GraphQL exposes a single endpoint that handles everything.

The official definition (from the docs): "GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data." In GraphQL you define your data structure in a schema, a bit like defining tables in a SQL database, which you can then manipulate using the operations GraphQL provides.

There are mainly three types of operations in GraphQL. Instead of a GET request you make a query. In place of a POST, PUT or DELETE request you make a mutation; as the name suggests, a mutation is the operation that modifies data on the server.

The third type, subscriptions, is used to make real-time connections through sockets using a pub/sub mechanism. These three are also called the root schema types.

While building an API for a bookstore, we first define the schema using different types and then resolve those types by fetching data from a source such as a database (MongoDB or Postgres, say). You can even use an existing RESTful API as a source and wrap it in a GraphQL API.

Prerequisites for the tutorial

As usual you will need:
  • Node.js (I recommend version 8 LTS, but don't use anything below 7.6).
  • MongoDB (although you can use any other DB; just adapt your resolver logic).
  • I recommend having yarn installed.
  • A good browser.
  • I also recommend a GraphQL plugin for your text editor. VSCode has some.

Install Dependencies

As expected we will start by bloating our holy node_modules folder with dependencies.

$ yarn init -y # or npm init -y
$ yarn add express cors graphql graphql-tools apollo-server-express mongoose # or
$ npm i express cors graphql graphql-tools apollo-server-express mongoose --save # for npm users

Setting up express & mongoose boilerplate

Create the file app.js in the root of your project directory. This is simple Express & Mongoose boilerplate; you might want to copy and paste it.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');

mongoose.Promise = global.Promise;
mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});

mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});

mongoose.model('Book', BookSchema);

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;

app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});
Start your server with node app.js. Pointing your browser at http://localhost:8080 should return Cannot GET / since we have not defined any routes yet. You may want to install nodemon for live server reloading.

Defining our Schema

Our schema in an Apollo-based setup is essentially one giant string in which we define the structure of our bookstore data.

Create a file named schema.js and add the following.

module.exports = `
  type Query{
    getAllBooks: [Book]
    getBookById(id: String!): Book
  }
  type Mutation{
    postBook(title:String! author:String! price: Int!): Book!
    deleteBook(id:String!): Book
    updateBook(title:String! author:String! price: Int! id:String!): Book!
  }
  type Book{
    _id: String!
    title: String!
    author: String!
    price: Int!
  }
`
Looks familiar, right? Almost like JSON. There are the two root types I explained above: Query and Mutation. We define the name of a query as the key and what it returns as the value. You may be wondering how I used Book in there.

Well, we can group some properties to make a custom type. It consists of primitive types like String, Int, Boolean, Float, etc. The ! denotes that the property can't resolve to null; otherwise an error is thrown. It will make more sense once we resolve the fields.

Resolving our fields

Now that we have defined the structure of our data in the schema, we have to resolve it somehow. In this example we are going to use Mongoose to grab the data. Each resolver function receives three main arguments.

The first argument, called parent, holds the previously resolved value of the field. It is useful if we want to manipulate data after fetching it from the db, but it is not needed in this example. The second argument, called args, holds the values passed to a query when it is called.

See the getBookById query above: its argument is the id defined in the parentheses. The third argument is called context. It is supplied when we create our endpoint and can hold values like req.user, secrets, etc.

So now create a resolver.js and add the following, 

const mongoose = require('mongoose');

const Book = mongoose.model('Book');

module.exports = {
  Query: {
    getAllBooks: async () => await Book.find(),
    getBookById: async (parent, args) => await Book.findById(args.id)
  },
  Mutation: {
    postBook: async (parent, args) => {
      const newBook = new Book(args);
      return await newBook.save();
    },
    deleteBook: async (parent, { id }) => { // You can destructure the args
      return await Book.findByIdAndRemove(id)
    },
    updateBook: async (parent, { id: _id, ...doc }) => {
      await Book.update({ _id }, doc);
      return { _id, ...doc }
    }
  }
}
As seen in the code, the resolver is a giant object with Query and Mutation as the root keys. Each query and mutation we defined earlier in the schema appears here, essentially as a function that returns the data.

Making the Graphql endpoint

Add this code to make a GraphQL endpoint; it is simple Express middleware stuff. We are also going to add GraphiQL, a GraphQL client for testing the API, sort of like Postman if you are familiar with that.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');
+ const { makeExecutableSchema } = require('graphql-tools');
+ const { graphiqlExpress, graphqlExpress } = require('apollo-server-express');

mongoose.Promise = global.Promise;

mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});

mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});

mongoose.model('Book', BookSchema);

+ const typeDefs = require('./schema');
+ const resolvers = require('./resolver');

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;


+ const schema = makeExecutableSchema({
+  typeDefs,
+  resolvers
+ });

+ app.use(
+  '/graphql',
+  express.json(),
+  graphqlExpress(() => ({
+    schema,
+  })),
+ );

+ app.use(
+  '/graphiql',
+  graphiqlExpress({
+    endpointURL: '/graphql',
+  }),
+ );

app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});

Test it using Graphiql

Go to http://localhost:8080/graphiql to test your work.
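In GraphiQL you can exercise the schema we defined above, for example with a mutation followed by a query (the book title, author, and price here are just sample values):

```graphql
mutation AddBook {
  postBook(title: "Sample Book", author: "Sample Author", price: 10) {
    _id
    title
  }
}

query ListBooks {
  getAllBooks {
    title
    author
    price
  }
}
```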

Conclusion

GraphQL is an amazing technology. See how easy it is to implement a CRUD API? You should definitely use it in your stack.
Top 10 SEO Content Writing tips Everyone Should Know

Almost everyone focuses on factors other than SEO-optimized content. What they don't tell you is that "Content is King": Google and other search engines give content more weight than other off-page SEO factors, and Google's recent algorithm updates have made SEO revolve around content even more.

In this article, I will explain 10 tips for SEO-optimized content writing.

Here are ten key concepts to keep in mind while writing content for SEO:


Content matters

Like I said, no matter what your subject matter is, your content has the potential to reach a large audience and help your company beyond the SEO benefits. Website content, articles and blog posts are ways to connect your company to relevant, interesting information about your industry.

Creating content that is well-written and focuses on interesting topics in your industry will help it go beyond increasing your page rank by being linked to and shared by outsiders.

Stay on topic

Writing content optimized for SEO is not the same as writing your personal blog! There’s nothing wrong with adding some personality to your writing - this helps to keep the readers intrigued. Just make sure that you stay on the topic you are writing about and avoid drifting into unrelated areas.

Write compelling headings 

The titles of web pages and articles are among the most important things to focus on when writing for SEO. Don't use a generic, boring headline that simply describes what the article or web copy is about. Instead, think like a newspaper headline writer and craft a headline that makes someone want to read more.

According to studies, Google ranks pages with a higher click-through rate above comparable pages, and a higher click-through rate is achieved with attention-grabbing titles.

Ideal keyword length and density

Content written for articles should be 300 to 1,000 words in length; blog posts can be shorter. The length of your article will largely be determined by the number of keywords you are using: one keyword for every 50 to 100 words of content is a good rule of thumb.
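As a quick sanity check of that rule of thumb, here is a small sketch (keywordDensity is a hypothetical helper; it counts case-insensitive exact word matches):

```javascript
// Count total words and how many words there are per keyword occurrence.
function keywordDensity(text, keyword) {
  const words = text.trim().split(/\s+/);
  const hits = words.filter((w) => w.toLowerCase() === keyword.toLowerCase()).length;
  return {
    words: words.length,
    hits,
    wordsPerKeyword: hits ? words.length / hits : Infinity,
  };
}

const d = keywordDensity('seo tips for seo content writing', 'seo');
console.log(d); // 6 words, 2 hits -> one keyword per 3 words
```

A ratio between 50 and 100 for wordsPerKeyword means the draft matches the guideline above.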

Add keywords to the bio

Biographical information given for a magazine story or an article for a directory submission is a great chance for you to enhance your credentials and also optimize your content. Make sure that you include one keyword in your bio.

Don’t stack keywords

Cramming keywords in an article or page of web content (such as being part of a list) looks unprofessional since all of your linked keywords will be running together on the page. Spread your keywords evenly throughout the article so they look more organic.

Use meta descriptions

Meta descriptions are roughly 150-character summaries of your content and are excellent opportunities for optimization. Make sure your content has meta descriptions with keywords included; this is a great way to get Google to recognize your content and increase its value.

Write and tag hierarchically

For Google and other search engines to give your content the highest placement possible, it needs to look professional and well-structured, and using the appropriate tags does this. Use h1 tags for titles, h2 tags for subtitles, and so on.

This concept will also help your writing by encouraging you to put your most compelling ideas first.

Write original content

Having duplicate content is a major no-no for search engine optimization, so don’t just “copy and paste” content from existing sources. Make sure your content is fresh and original - even copying your own content is a poor SEO decision.

Choose relevant images

Creative Commons search tools like the Google Image Search Assistant will help you find intriguing images that are free to use without risking copyright infringement.