How to Find Blogger RSS Feed URL - Easily (Blogspot)

An RSS feed URL is a web address that enables readers to subscribe to a blog. It allows them to stay updated with the latest posts and articles of any blog they want to follow.

The RSS feed URL serves XML data that includes titles, summaries, and links to the full posts. So whenever a summary catches the reader's interest, they can follow the link and easily read the whole post. Readers can use various RSS feed readers like Follow.it or FeedBurner to subscribe to these feeds and receive notifications whenever new content is published.

How to Find Blogger RSS Feed URL

Finding the Blogger RSS feed URL can be a little bit tricky. Finding the RSS feed URL of a WordPress site is pretty easy, as there are quite a lot of plugins to do so. But I will show you how you can easily find the RSS feed URL of your Blogger site. There are two ways to find the Blogger RSS feed URL, and I will show you both of them.

Method 1: From Page Source

With this method you can directly find your Blogger site's RSS feed URL. However, you need a computer for this; this method won't work on mobile devices. So follow these steps,

Step 1: Visit Your Site

Open Google Chrome, enter your site's URL, and open your site.

Step 2: Right Click

Now, right-click with your mouse while your site is open in the tab.

Step 3: View Page Source

Click "View page source" in the menu that appears on right-click. This will open the page source tab.

Step 4: Find Feed

In the page source tab, press Ctrl+F on your keyboard to search for your RSS feed URL. When the search box opens, type "feed" and press Enter. You'll find a total of 3 feed URLs, and all of them can be used as a feed. However, only one of them is the RSS feed; the others are Atom feed URLs. The one containing the ?alt=rss parameter is the RSS feed URL.
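For reference, the matches typically appear as `<link>` tags in the page's `<head>`. On a Blogger page they usually look roughly like this (the blog URL and titles here are placeholders, not your actual values):

```html
<link rel="alternate" type="application/atom+xml"
      title="Example Blog - Atom"
      href="https://example.blogspot.com/feeds/posts/default" />
<link rel="alternate" type="application/rss+xml"
      title="Example Blog - RSS"
      href="https://example.blogspot.com/feeds/posts/default?alt=rss" />
```

The second tag, with ?alt=rss in its href, is the RSS feed you are looking for.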


Method 2: From Your Domain

This method is indirect but the easiest. You can easily find the RSS feed URL of any Blogger website with it, and you can use any device. Follow the steps below,

Step 1: Find Your Domain URL

Find your domain URL. For example, we will assume your domain URL is https://example.com/.

Step 2: Add feeds/posts/default?alt=rss

Add "feeds/posts/default?alt=rss" after your domain URL. For example, if your domain URL is https://example.com/, your RSS feed URL will be https://example.com/feeds/posts/default?alt=rss.
And your Atom feed URL will be https://example.com/feeds/posts/default.
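The rule above is purely mechanical, so it can be sketched in a few lines of JavaScript. (The helper name `feedUrls` is my own for illustration, not anything Blogger provides.)

```javascript
// Build Blogger feed URLs from a domain by appending the
// standard feed paths described above.
function feedUrls(domain) {
  // Normalize: make sure the domain ends with a slash.
  const base = domain.endsWith('/') ? domain : domain + '/';
  return {
    rss: base + 'feeds/posts/default?alt=rss',
    atom: base + 'feeds/posts/default',
  };
}

console.log(feedUrls('https://example.com'));
// → { rss: 'https://example.com/feeds/posts/default?alt=rss',
//     atom: 'https://example.com/feeds/posts/default' }
```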

How to make a Graphql API in Express Framework - Easily

Graphql is booming! It is growing very fast. I am using it in almost all of my projects, and I am loving the design pattern. In this post I am going to show you how you can make a Graphql API using Express (a popular Node.js web framework), and we will be using the popular Apollo Graphql tools to make it happen.

What is Graphql

Graphql is a sort of SQL-like replacement for RESTful APIs. In Graphql you make queries instead of hitting endpoints, and ask your server to provide only the parts of the data you want. There is only one endpoint exposed in Graphql, and it handles everything.

Official definition (from the docs): "GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data." In Graphql you define your data structure in a schema, sort of like you define tables in a SQL database, and you can manipulate it using the operations Graphql provides.

There are mainly three types of operations in Graphql. Instead of a GET request you make a query. In place of a POST, PUT or DELETE request you make a mutation. As the name suggests, a mutation in Graphql is an operation that modifies data on the server.

The third type is called a subscription, which is used to make real-time connections over sockets. It uses a pubsub mechanism. These three are also called root schema types.

While building an API for a bookstore, we first define the schema using different types, and then resolve those types by fetching data from a source like a database (MongoDB or Postgres, maybe). You may even use an existing RESTful API as a source and wrap it in a Graphql API.

Prerequisites for the tutorial

As usual you will need:
  • Node.js (I recommend getting version 8 LTS, but don't use anything below 7.6).
  • MongoDB (although you can use any other DB, just adapt the resolver logic).
  • I recommend having yarn installed.
  • A good browser.
  • I also recommend a Graphql plugin for your text editor. VSCode has some.

Install Dependencies

As expected we will start by bloating our holy node_modules folder with dependencies.

$ yarn init -y # or npm init -y
$ yarn add express cors graphql graphql-tools apollo-server-express mongoose # or
$ npm i express cors graphql graphql-tools apollo-server-express mongoose --save # for npm users

Setting up express & mongoose boilerplate

Create a file named app.js in the root of your project directory. This is simple express & mongoose boilerplate, so you might want to copy-paste it.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');

mongoose.Promise = global.Promise;

mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});

mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});

mongoose.model('Book', BookSchema);

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;

app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});
Start your server with node app.js. Pointing your browser to http://localhost:8080 should show Cannot GET /, since we have not defined any routes yet. You may also want to install nodemon for live server reloading.

Defining our Schema

With Apollo's tools, our schema is essentially a giant string. In it we will define the structure of our bookstore data.

Create a file named schema.js and add the following.

module.exports = `
  type Query{
    getAllBooks: [Book]
    getBookById(id: String!): Book
  }
  type Mutation{
    postBook(title:String! author:String! price: Int!): Book!
    deleteBook(id:String!): Book
    updateBook(title:String! author:String! price: Int! id:String!): Book!
  }
  type Book{
    _id: String!
    title: String!
    author: String!
    price: Int!
  }
`
Looks familiar, right? Almost like JSON. There are two of the root types I explained above: Query and Mutation. We define a query's name as the key, and what we want it to return as the value. You may be wondering how I used Book in there.

Well, we can group some properties to make a custom type. It consists of primitive (scalar) types like String, Int, Boolean, Float, etc. The ! denotes that the field can't resolve to null; otherwise an error will be thrown. It will make more sense once we resolve the fields.
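To make the schema concrete, here is what a hypothetical client request against the Query type above could look like once the server is running (the id value is a placeholder; the client chooses which Book fields to select):

```graphql
query {
  getBookById(id: "some-book-id") {
    title
    author
    price
  }
}
```

Only the selected fields (title, author, price) come back in the response; that selectivity is the core appeal of Graphql.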

Resolving our fields

Now that we have defined the structure of our data in the schema, we have to resolve it somehow. In this example we are going to use mongoose to grab the data. A resolver function takes three arguments.

The first argument is called parent, which holds the previously resolved value of the enclosing field. It is useful if we want to manipulate data after fetching it from the db, but it will not be required in this example. The second argument is called args, which holds the values passed to the query when it is called.

See the getBookById query above: its argument is the id defined inside the parentheses. The third argument is called context. It is set up when we create our endpoint and can hold values like req.user, secrets, etc.
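The (parent, args, context) signature can be seen in isolation with an in-memory stand-in for the Book model (the books array and its contents here are made up for illustration):

```javascript
// Hypothetical in-memory data standing in for the mongoose Book model.
const books = [
  { _id: '1', title: 'Dune', author: 'Frank Herbert', price: 10 },
];

const resolvers = {
  Query: {
    // parent:  previously resolved value (unused here)
    // args:    values passed in the query, e.g. getBookById(id: "1")
    // context: per-request values set up at the endpoint (user, secrets, ...)
    getBookById: (parent, args, context) =>
      books.find((b) => b._id === args.id) || null,
  },
};

console.log(resolvers.Query.getBookById(null, { id: '1' }).title); // → Dune
```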

So now create a resolver.js and add the following, 

const mongoose = require('mongoose');

const Book = mongoose.model('Book');

module.exports = {
  Query: {
    getAllBooks: async () => await Book.find(),
    getBookById: async (parent, args) => await Book.findById(args.id)
  },
  Mutation: {
    postBook: async (parent, args) => {
      const newBook = new Book(args);
      return await newBook.save();
    },
    deleteBook: async (parent, { id }) => { // You can destructure the args
      return await Book.findByIdAndRemove(id)
    },
    updateBook: async (parent, { id: _id, ...doc }) => {
      await Book.update({ _id }, doc);
      return { _id, ...doc }
    }
  }
}
As seen in the code, the resolver is a giant object with Query and Mutation as its root keys. Each query and mutation we defined in the schema earlier is essentially a function that returns the data.

Making the Graphql endpoint

Add this code to make a Graphql endpoint. It is simple express middleware stuff. We are also going to add Graphiql, a Graphql client for testing the API, sort of like Postman if you are familiar with that.

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');
+ const { makeExecutableSchema } = require('graphql-tools');
+ const { graphiqlExpress, graphqlExpress } = require('apollo-server-express');

mongoose.Promise = global.Promise;

mongoose.connect(process.env.DB || 'mongodb://localhost:27017/bookstore', {
  useMongoClient: true,
});

mongoose.connection.on('connected', () => console.log('Connected to mongo'));
mongoose.connection.on('error', (e) => console.log(`Aw shoot mongo --> ${e}`));

const BookSchema = new mongoose.Schema({
  title: String,
  author: String,
  price: Number
});

mongoose.model('Book', BookSchema);

+ const typeDefs = require('./schema');
+ const resolvers = require('./resolver');

const app = express();
app.use(cors('*'));

const PORT = process.env.PORT || 8080;


+ const schema = makeExecutableSchema({
+  typeDefs,
+  resolvers
+ });

+ app.use(
+  '/graphql',
+  express.json(),
+  graphqlExpress(() => ({
+    schema,
+  })),
+ );

+ app.use(
+  '/graphiql',
+  graphiqlExpress({
+    endpointURL: '/graphql',
+  }),
+ );

app.listen(PORT, () => {
  console.log(`UP on --> http://localhost:${PORT}`);
});

Test it using Graphiql

Go to http://localhost:8080/graphiql to test your work.
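In the GraphiQL editor you can try, for example, a mutation followed by a query (the title, author and price values here are placeholders):

```graphql
mutation {
  postBook(title: "Dune", author: "Frank Herbert", price: 9) {
    _id
    title
  }
}
```

Then run getAllBooks in a query to confirm the book was saved.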

Conclusion

Graphql is an amazing technology. See how easy it is to implement a CRUD API. You should definitely consider it for your stack.
How to Get Ranked for Your Website Name - Easy and Fast

There’s a lot of advice out there on how to get ranked for competitive keyword terms (like weight loss, credit cards and so on). But what about ranking first for your website’s name (if you have a new site)? If you have a domain named 2ndwebdesigner.com, that doesn’t mean you’ll be #1 for ’2ndwebdesigner’ or ’2nd web designer’ immediately! Why?

We will try to explain why, and also how you can get ranked for your website’s name.

Put in rough terms, Google ranks websites using 2 criteria:
  • Relevancy
  • Authority

Relevancy

Example: If you try to rank #1 for ‘wedding flowers’ and have a page on your site for that term, Google will try to use several criteria to determine how relevant your page is for people that use that keyword ‘wedding flowers’ (or variations) and rank you accordingly.

Nobody knows all the factors Google uses to determine how relevant a page is for a specific keyword. The best types of evidence we have so far are 1) correlation evidence and 2) making your own index of the web and trying to match it with Google’s index. A lot of resources are required to do both of those things (dedicated servers, etc.).

Fortunately for us, a few good companies like Moz do this, constantly finding correlation data and making various comparisons. Their latest work on relevancy is the LDA. It’s a very complex algorithm, and it is suspected Google uses some variation of it to determine how relevant a page is for a specific keyword.

But in your case, you just want to get ranked for your website’s name. So the fact that your URL (2ndwebdesigner.com in this example) matches the keyword ’2ndwebdesigner’ or ’2nd web designer’ tells Google that your homepage is EXTREMELY relevant for those keywords, which are navigational queries. Put in simple terms, a navigational query is a keyword you type when you’re looking for a specific site, like Microsoft or 800 Flowers.

Authority

This is where you need to focus if you want to rank for your website’s name. The primary reason Google is not ranking you for your name is that you have no (or very few) links. Authority = links, put in rough terms.

Now, I wish things were that simple :) Authority = links, so go and get more links, right? Yes and no.

Yes, you need to get more links but if all of those links are from 1 unique root domain*, then your ‘authority’ is limited.

*unique root domain = a domain from 1 root URL. For example, if you get a link from a page on yahoo.com, that’s 1 unique root domain. If you get a link from a page on news.yahoo.com (a subdomain), that still counts as the same 1 root domain. If you get links from pages on cnn.com, bbc.com and news.cnn.com, that’s 2 unique root domains.
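The counting rule can be sketched in a few lines of JavaScript. Note this naive version just keeps the last two labels of each hostname; real tools consult the Public Suffix List to handle suffixes like .co.uk correctly, and the helper names here are made up for illustration:

```javascript
// Naive root-domain extraction: keep the last two labels of the hostname.
function rootDomain(hostname) {
  return hostname.split('.').slice(-2).join('.');
}

// Count distinct root domains among a list of linking hostnames.
function countUniqueRootDomains(hostnames) {
  return new Set(hostnames.map(rootDomain)).size;
}

// The article's example: cnn.com + bbc.com + news.cnn.com
console.log(countUniqueRootDomains(['cnn.com', 'bbc.com', 'news.cnn.com'])); // → 2
```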

Okay, why do links from unique root domains matter? Because search engines most probably use them as one of the major ranking factors. It is one of the hardest things to manipulate: you can easily get 300 links from 1 domain, but what about 300 links from 300 domains? That would take 10x, maybe 100x more time. Generally speaking, the harder a link is to get, the more valuable it is.

How to build ‘authority’ in the eyes of the search engines

We now know that we need links from a diverse number of domains to build authority in the eyes of the search engines. Now comes the hard part: where to get links from? To get ranked for your website’s name, you can start by submitting your site to some simple social bookmarking sites.

You can submit your site on,
There are many more…but I found these to be a) reputable and b) high-authority.

Do a guest post

Know someone who owns a blog? Ask them for a guest post and add a link to your site at the end of the article. This should help a lot, especially if your friend’s site has some popularity. You can also ask strangers who have blogs if you can write a guest post for them; nowadays guest posting is a very hot topic.

Explore more link building methods

There are so many link building methods out there that it’s impossible to explore them all in one post.

Please avoid black hat methods at any cost, especially at the beginning. By black hat I mean things like buying links.

I would define buying links as DIRECTLY exchanging money for a link without any editorial control in the process. Buying link building services is NOT buying links, because a) there’s editorial control to check whether your site is family-friendly, and b) you’re not exchanging money for links directly; you’re exchanging your money for another person’s TIME.

That person will then use their time to go to websites that provide links for free and get them. You could easily obtain those links yourself without any money investment.

Start Slowly

If you’re not doing anything black hat (like using a software tool to blast 100,000 links in a day, which is obviously a way to game the search engines), you should be fine. I’m pretty careful with any new site I build, and I try to rank it first for its NAME.

I’ve seen so many people who haven’t ranked their sites for their names, and yet they go for phrases like ‘coding blog’ or whatever. In our example, you first want to be #1 for ’2ndwebdesigner’ and ’2nd web designer’ before going for terms like ‘webdesign blog’ or ‘design blog’.

What about getting those links naturally?

You can do that as well! (Actually, search engines prefer natural links above all other types.) If your site/page gets popular on Digg or StumbleUpon, you’ll get a bunch of natural links and should rank for your website’s name pretty fast.

So that is an option too. Although I’m noticing that people who link naturally are migrating to Facebook/Twitter, and search engines haven’t caught up yet (they need to give more weight to links from different Facebook and Twitter profiles).