HTTP 302 Found
The HyperText Transfer Protocol (HTTP) 302 Found redirect status response code indicates that the resource requested has been temporarily moved to the URL given by the Location header.
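In Next.js, temporary redirects can be declared in next.config.js. A minimal sketch with hypothetical paths; note that when permanent is false, Next.js actually responds with 307 Temporary Redirect, the modern equivalent of 302:

// next.config.js — /old-page and /new-page are hypothetical
module.exports = {
  async redirects() {
    return [
      {
        source: "/old-page",
        destination: "/new-page",
        permanent: false, // false => temporary (307), true => permanent (308)
      },
    ];
  },
};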
HTTP 404 Not Found
In Next.js, create a custom 404 page at pages/404.js:

// pages/404.js
export default function Custom404() {
  return <h1>404 - Page Not Found</h1>;
}
HTTP 410 Gone
The HTTP 410 Gone client error response code indicates that the target resource is no longer available at the origin server and that this condition is likely to be permanent.
Use it for content that has been removed, e.g. threads deleted by the user or blog posts removed from the site.
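A minimal sketch of returning a 410 for deleted content; fetchPost is a hypothetical stand-in for your data layer:

// pages/posts/[id].js — sketch; fetchPost is hypothetical
async function fetchPost(id) {
  // Stand-in: replace with a real database or API call.
  // Returning null here models a post that was deleted.
  return null;
}

export async function getServerSideProps({ params, res }) {
  const post = await fetchPost(params.id);
  if (!post) {
    res.statusCode = 410; // gone for good — tells crawlers to drop the URL
    return { props: { post: null } };
  }
  return { props: { post } };
}

export default function Post({ post }) {
  return post ? <h1>{post.title}</h1> : <h1>This post has been removed</h1>;
}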
HTTP 503 Service Unavailable
The HTTP 503 Service Unavailable server error response code indicates that the server is not ready to handle the request.
Use it when the website is down and expected to stay down for an extended period of time. This prevents losing rankings on a long-term basis.
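A minimal sketch of serving a 503 during planned downtime, assuming a hypothetical maintenance page (the Retry-After value is illustrative):

// pages/maintenance.js — hypothetical maintenance page
export async function getServerSideProps({ res }) {
  res.statusCode = 503; // tell crawlers the outage is temporary
  res.setHeader("Retry-After", "3600"); // suggest retrying in an hour
  return { props: {} };
}

export default function Maintenance() {
  return <h1>We'll be back soon</h1>;
}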
robots.txt
robots.txt specifies which routes crawlers can access and which they cannot.
In Next.js, add a robots.txt file to the public folder.
// robots.txt
# Block all crawlers for /accounts
User-agent: *
Disallow: /accounts
# Allow all crawlers
User-agent: *
Allow: /
File available at http://<host>:<port>/robots.txt.
Sitemaps
As the name implies, a sitemap is a map of your site, telling search engines how to crawl it.
Sitemaps are the easiest way to communicate with Google. They indicate the URLs that belong to your website and when they update so that Google can easily detect new content and crawl your website more efficiently.
You might need a sitemap if:
Your site is really large. As a result, it's more likely Google web crawlers might overlook crawling some of your new or recently updated pages.
Your site has a large archive of content pages that are isolated or not well linked to each other. If your site pages don’t naturally reference each other, you can list them in a sitemap to ensure that Google doesn’t overlook some of your pages.
Your site is new and has few external links to it. Googlebot and other web crawlers navigate the web by following links from one page to another. As a result, Google might not discover your pages if no other sites link to them.
Your site has a lot of rich media content (video, images) or is shown in Google News. If provided, Google can take additional information from sitemaps into account for search, where appropriate.
Sitemaps are not required, but they are still strongly recommended for better search performance.
Keep sitemaps dynamic so they stay up to date as new content is added.
Generate dynamic sitemaps with getServerSideProps.
Create the file pages/sitemap.xml.js.
// pages/sitemap.xml.js
const EXTERNAL_DATA_URL = "https://jsonplaceholder.typicode.com/posts";

function generateSiteMap(posts) {
  return `<?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- We manually set the two URLs we know already -->
    <url>
      <loc>https://jsonplaceholder.typicode.com</loc>
    </url>
    <url>
      <loc>https://jsonplaceholder.typicode.com/guide</loc>
    </url>
    ${posts
      .map(({ id }) => {
        return `
    <url>
      <loc>${`${EXTERNAL_DATA_URL}/${id}`}</loc>
    </url>
  `;
      })
      .join("")}
  </urlset>
`;
}

function SiteMap() {
  // getServerSideProps will do the heavy lifting
}

export async function getServerSideProps({ res }) {
  // We make an API call to gather the URLs for our site
  const request = await fetch(EXTERNAL_DATA_URL);
  const posts = await request.json();

  // We generate the XML sitemap with the posts data
  const sitemap = generateSiteMap(posts);

  res.setHeader("Content-Type", "text/xml");
  // We send the XML to the browser
  res.write(sitemap);
  res.end();

  return {
    props: {},
  };
}

export default SiteMap;
Meta Robots Tags
Meta robots tags are directives that search engines will always respect. Adding these tags can make controlling the indexation of your website easier.
Meta robots tags and robots.txt files are directives and will always be obeyed, whereas canonical tags are recommendations that Google can decide to follow or not.
<meta name="robots" content="noindex,nofollow" />
The robots tag is very common. By default it has the value index,follow; all is an equivalent valid value.
noindex
Do not show this page in search results.
Omitting noindex indicates the page can be indexed and shown in search results.
Sample Use Cases: Settings, policies, internal search pages.
nofollow
Do not follow links on this page.
Omitting this will allow robots to crawl and follow links on this page. Links found on other pages may still enable crawling: if link A appears on pages X and Y, and X has a nofollow robots tag but Y doesn't, Google may decide to crawl the link.
When users search for your site, Google Search results sometimes display a search box specific to your site, along with other direct links to your site. This tag tells Google not to show the sitelinks search box.
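The corresponding tag:

<meta name="google" content="nositelinkssearchbox" />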
<meta name="google" content="notranslate" />
Asks Google not to provide a translation for the page.
import Head from "next/head";

function IndexPage() {
  return (
    <div>
      <Head>
        <title>Meta Tag Example</title>
        <meta name="google" content="nositelinkssearchbox" key="sitelinks" />
        <meta name="google" content="notranslate" key="notranslate" />
      </Head>
      <p>Here we show some meta tags off!</p>
    </div>
  );
}

export default IndexPage;
Canonical Tags
A canonical URL is the URL of the page that search engines think is most representative from a set of duplicate pages on your site.
If Google finds several URLs that have the same content, it might decide to demote them in search results because they can be considered duplicated.
This also happens across domains. If you run two different websites and post the same content in each one, search engines can decide to pick one of them to be ranked, or directly demote both.
This is where canonical tags are extremely useful. They let Google know which URLs are the original source of truth and which are duplicated. Lots of duplicated pages across the same or different domains can lead to bad rankings or even penalizations.
For example, https://example.com/products/phone and https://example.com/phone might serve the same content. Both are valid, working URLs, but we use a canonical tag to prevent the detection of duplicate content that we own. If we decided that https://example.com/products/phone should be considered for rankings, we would create a canonical tag:
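A sketch using next/head (the page text is illustrative):

import Head from "next/head";

function IndexPage() {
  return (
    <div>
      <Head>
        <title>Canonical Tag Example</title>
        <link
          rel="canonical"
          href="https://example.com/products/phone"
          key="canonical"
        />
      </Head>
      <p>This page is reachable at two URLs.</p>
    </div>
  );
}

export default IndexPage;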
Rendering Strategies
The most important thing for SEO is that page data and metadata are available on page load without JavaScript.
SSG (Static Site Generation)
SSR (Server-Side Rendering)
ISR (Incremental Static Regeneration)
Use ISR when a large number of pages takes too long to generate at build time. Next.js allows you to create or update static pages after you have built your site.
Incremental Static Regeneration enables developers and content editors to use static generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages.
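A minimal ISR sketch, reusing the jsonplaceholder API from the sitemap example (the revalidate interval is illustrative):

// pages/posts/[id].js — minimal ISR sketch
export async function getStaticProps({ params }) {
  const res = await fetch(
    `https://jsonplaceholder.typicode.com/posts/${params.id}`
  );
  const post = await res.json();

  return {
    props: { post },
    // Re-generate this page in the background, at most once every 60 seconds.
    revalidate: 60,
  };
}

export async function getStaticPaths() {
  // Generate pages on first request instead of at build time.
  return { paths: [], fallback: "blocking" };
}

export default function Post({ post }) {
  return <h1>{post.title}</h1>;
}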
Image Optimization
Instead of optimizing images at build time, Next.js optimizes images on-demand as users request them. Unlike static site generators and static-only solutions, build times don't increase, whether shipping ten images or ten million images.
Lazy Loaded Images
Images are lazy loaded by default. Page speed won’t be penalized for images housed outside of the viewport. Images only load when they come into view.
Avoids CLS
Images are always rendered with their dimensions reserved, avoiding Cumulative Layout Shift (CLS).
Sample Code
import Image from "next/image";

// src and alt are placeholders — supply your own image and description
export default function Example() {
  return <Image src="/hero.png" alt="Hero" width={1920} height={1080} />;
}
Dynamic Imports
Goal: reduce the amount of JavaScript loaded during initial page load from third-party libraries.
import dynamic from "next/dynamic";

// Dynamic import replaces the static
// `import CodeSampleModal from "../components/CodeSampleModal"`,
// so the component's JavaScript is excluded from the initial bundle.
const CodeSampleModal = dynamic(() => import("../components/CodeSampleModal"), {
  ssr: false,
});
Optimizing Fonts
Next.js has built-in Automatic Webfont Optimization. By default, Next.js will automatically inline font CSS at build time, eliminating an extra round trip to fetch font declarations. This results in improvements to First Contentful Paint (FCP) and Largest Contentful Paint (LCP).
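A sketch of what this looks like in practice, assuming a Google Fonts stylesheet added in a custom pages/_document.js (the font choice is illustrative); Next.js inlines the font CSS at build time:

// pages/_document.js — the Inter font is an illustrative choice
import Document, { Html, Head, Main, NextScript } from "next/document";

class MyDocument extends Document {
  render() {
    return (
      <Html>
        <Head>
          {/* Next.js replaces this link with the font CSS inlined at build time */}
          <link
            href="https://fonts.googleapis.com/css2?family=Inter&display=optional"
            rel="stylesheet"
          />
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;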
Optimizing Third-Party Scripts
Embedding third-party code can delay page content from rendering and affect performance if it is loaded too early.
Next.js provides a built-in Script component that optimizes loading for any third-party script, while giving developers the option to decide when to fetch and execute it.
import Head from "next/head";
import Script from "next/script";

function IndexPage() {
  return (
    <div>
      <Head>
        {/* Unoptimized: a plain script blocks while it loads */}
        <script src="https://www.googletagmanager.com/gtag/js?id=123" />
      </Head>
      {/* Optimized: next/script defers loading until after hydration */}
      <Script
        strategy="afterInteractive"
        src="https://www.googletagmanager.com/gtag/js?id=123"
      />
    </div>
  );
}

export default IndexPage;
Next.js Analytics allows you to analyze and measure the performance of pages using Core Web Vitals.
Custom Reporting
It is also possible to use the built-in relayer Next.js Analytics uses and send the data to your own service or push it to Google Analytics.
Add the following to pages/_app.js.
// pages/_app.js
export function reportWebVitals(metric) {
  console.log(metric);
}
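As a sketch of custom reporting, the same hook could forward each metric to a hypothetical /api/vitals endpoint:

// pages/_app.js — /api/vitals is a hypothetical collection endpoint
export function reportWebVitals(metric) {
  const body = JSON.stringify(metric);
  const url = "/api/vitals";

  // Prefer sendBeacon when available so the request survives page unloads.
  if (navigator.sendBeacon) {
    navigator.sendBeacon(url, body);
  } else {
    fetch(url, { body, method: "POST", keepalive: true });
  }
}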