See SEO for more general information.

Status Code

200

HTTP 200 OK

301/308

HTTP 301 Moved Permanently. Note: Next.js permanent redirects use 308 by default instead of 301, since 308 is the newer status code and is considered the better option: unlike 301, it preserves the request method across the redirect.

// pages/about.js
export async function getStaticProps(context) {
  return {
    redirect: {
      destination: "/",
      permanent: true, // triggers 308
    },
  };
}

or

// next.config.js
module.exports = {
  async redirects() {
    return [
      {
        source: "/about",
        destination: "/",
        permanent: true, // triggers 308
      },
    ];
  },
};

302

HTTP 302 Found. Indicates that the requested resource has been temporarily moved to the URL given by the Location header.
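
Note that for temporary redirects Next.js sends 307 when permanent: false is set; to send a literal 302, the redirect object accepts statusCode instead. A minimal sketch (the page paths are hypothetical):

// pages/old-page.js
export async function getServerSideProps(context) {
  return {
    redirect: {
      destination: "/new-page",
      statusCode: 302, // use statusCode OR permanent, not both
    },
  };
}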

404

HTTP 404 Not Found. The requested resource was not found.

export async function getStaticProps(context) {
  return {
    notFound: true, // triggers 404
  };
}

Custom 404 Page

// pages/404.js
export default function Custom404() {
  return <h1>404 - Page Not Found</h1>;
}

410

HTTP 410 Gone. The target resource is no longer available at the origin server, and this condition is likely to be permanent. Used for content that has been removed, e.g. threads deleted by the user or blog posts removed from the site.
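
Next.js has no built-in helper for 410; one way to send it is to set the status code on the response in getServerSideProps. A minimal sketch (the deleted check is a hypothetical placeholder):

// pages/posts/[id].js
export async function getServerSideProps({ res }) {
  const deleted = true; // hypothetical: look up whether the content was removed
  if (deleted) {
    res.statusCode = 410; // tells crawlers the content is permanently gone
  }
  return { props: {} };
}

export default function Post() {
  return <h1>This content has been removed</h1>;
}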

500

HTTP 500 Internal Server Error Next.js will automatically return a 500 status code for an unexpected application error. You can create a custom 500 error page that is statically generated at build time by creating pages/500.js.

// pages/500.js
export default function Custom500() {
  return <h1>500 - Server-side error occurred</h1>;
}

503

HTTP 503 Service Unavailable. Indicates that the server is not ready to handle the request. Used when the website is down and is expected to stay down for an extended period of time. This prevents losing rankings on a long-term basis.
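
A minimal sketch of returning 503 during planned maintenance (the MAINTENANCE_MODE environment variable is a hypothetical flag):

// pages/index.js
export async function getServerSideProps({ res }) {
  if (process.env.MAINTENANCE_MODE === "true") {
    res.statusCode = 503;
    res.setHeader("Retry-After", "86400"); // ask crawlers to retry in one day
  }
  return { props: {} };
}

export default function Home() {
  return <h1>Home</h1>;
}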

robots.txt

See robots.txt. A robots.txt file specifies which routes can be crawled and which cannot. In Next.js, add robots.txt to the public folder.

# robots.txt
# Block all crawlers for /accounts
User-agent: *
Disallow: /accounts

# Allow all crawlers
User-agent: *
Allow: /

File available at http://<host>:<port>/robots.txt.

Sitemaps

Read Sitemaps.

XML Sitemaps

As the name implies, a sitemap is a map of your site, telling search engines how to crawl it. Sitemaps are the easiest way to communicate with Google. They indicate the URLs that belong to your website and when they are updated, so that Google can easily detect new content and crawl your website more efficiently.

According to Google, you might need a sitemap if:

  • Your site is really large. As a result, it’s more likely Google web crawlers might overlook crawling some of your new or recently updated pages.
  • Your site has a large archive of content pages that are isolated or not well linked to each other. If your site pages don’t naturally reference each other, you can list them in a sitemap to ensure that Google doesn’t overlook some of your pages.
  • Your site is new and has few external links to it. Googlebot and other web crawlers navigate the web by following links from one page to another. As a result, Google might not discover your pages if no other sites link to them.
  • Your site has a lot of rich media content (video, images) or is shown in Google News. If provided, Google can take additional information from sitemaps into account for search, where appropriate.

Sitemaps are not mandatory, but they are strongly recommended for better search performance. Always try to make sitemaps dynamic so they stay current as new content is added.

Add sitemaps to Next.js

Manual

For a simple site, add a public/sitemap.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!-- public/sitemap.xml -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/foo</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
</urlset>

getServerSideProps

Generate dynamic sitemaps with getServerSideProps. Create the file pages/sitemap.xml.js.

// pages/sitemap.xml.js
const EXTERNAL_DATA_URL = "https://jsonplaceholder.typicode.com/posts";
 
function generateSiteMap(posts) {
  return `<?xml version="1.0" encoding="UTF-8"?>
   <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
     <!--We manually set the two URLs we know already-->
     <url>
       <loc>https://jsonplaceholder.typicode.com</loc>
     </url>
     <url>
       <loc>https://jsonplaceholder.typicode.com/guide</loc>
     </url>
     ${posts
       .map(({ id }) => {
         return `
       <url>
           <loc>${`${EXTERNAL_DATA_URL}/${id}`}</loc>
       </url>
     `;
       })
       .join("")}
   </urlset>
 `;
}
 
function SiteMap() {
  // getServerSideProps will do the heavy lifting;
  // nothing renders here because the response is written and ended below
}
 
export async function getServerSideProps({ res }) {
  // We make an API call to gather the URLs for our site
  const request = await fetch(EXTERNAL_DATA_URL);
  const posts = await request.json();
 
  // We generate the XML sitemap with the posts data
  const sitemap = generateSiteMap(posts);
 
  res.setHeader("Content-Type", "text/xml");
  // we send the XML to the browser
  res.write(sitemap);
  res.end();
 
  return {
    props: {},
  };
}
 
export default SiteMap;

Special Meta Tags for Search Engines

See Special Meta Tags for Search Engines.

Meta robots tags and robots.txt files are directives that search engines will always obey; adding them can make the indexation of your website easier. Canonical tags, in contrast, are recommendations that Google can decide to follow or not.

<meta name="robots" content="noindex,nofollow" />

The robots tag is very common. By default, it has the value index,follow (all is also a valid alias).

  • noindex
    • Do not show this page in search results.
    • Omitting noindex indicates the page can be indexed and shown in search results.
    • Sample use cases: settings, policies, internal search pages.
  • nofollow
    • Do not follow links on this page.
    • Omitting this allows robots to crawl and follow links on this page. Links found on other pages may still enable crawling: if link A appears in pages X and Y, and X has a nofollow robots tag but Y doesn't, Google may decide to crawl the link.

You can see a full list of directives in the Google official documentation.
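
A sketch of applying noindex to a single page with next/head (the settings page is a hypothetical example):

import Head from "next/head";

// Hypothetical settings page that should be excluded from search results
export default function SettingsPage() {
  return (
    <div>
      <Head>
        <meta name="robots" content="noindex" />
      </Head>
      <h1>Settings</h1>
    </div>
  );
}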

Googlebot Tags

<meta name="googlebot" content="noindex,nofollow" />

Googlebot-specific tag.

<meta name="google" content="nositelinkssearchbox" />

When users search for your site, Google Search results sometimes display a search box specific to your site, along with other direct links to your site. This tag tells Google not to show the sitelinks search box.

<meta name="google" content="notranslate" />

Asks Google not to provide a translation for the page.

Sample Code

import Head from "next/head";
 
function IndexPage() {
  return (
    <div>
      <Head>
        <title>Meta Tag Example</title>
        <meta name="google" content="nositelinkssearchbox" key="sitelinks" />
        <meta name="google" content="notranslate" key="notranslate" />
      </Head>
      <p>Here we show some meta tags off!</p>
    </div>
  );
}
 
export default IndexPage;

Canonical Tag

See Canonical Tag.

A canonical URL is the URL of the page that search engines think is most representative from a set of duplicate pages on your site.

If Google finds several URLs that have the same content, it might decide to demote them in search results because they can be considered duplicated.

This also happens across domains, if you run two different websites and post the same content in each one, search engines can decide to pick one of them to be ranked, or directly demote both.

This is where canonical tags are extremely useful. They let Google know which URL is the original source of truth and which ones are duplicates. Lots of duplicated pages across the same or different domains can lead to bad rankings or even penalties.

For example, suppose the same product is reachable at both https://example.com/products/phone and https://example.com/phone. Both are valid, working URLs, but we use a canonical tag to prevent the detection of duplicate content that we own. If we decided that https://example.com/products/phone should be considered for rankings, we would create a canonical tag:

<link rel="canonical" href="https://example.com/products/phone" />

Reference

Link to original

Rendering Strategies

The most important thing for SEO is that page data and metadata are available on page load without JavaScript.

  • SSG (Static Site Generation)
  • SSR (Server-Side Rendering)
  • ISR (Incremental Static Regeneration)
    • Use when a large number of pages takes too long to generate at build time. Next.js allows you to create or update static pages after you have built your site.
    • Incremental Static Regeneration enables developers and content editors to use static generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages (a minimal sketch follows this list).
  • CSR (Client Side Rendering)
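
A minimal ISR sketch (the API URL is a hypothetical placeholder; revalidate is what enables ISR):

// pages/posts/[id].js
export async function getStaticProps({ params }) {
  // hypothetical data source
  const res = await fetch(`https://example.com/api/posts/${params.id}`);
  const post = await res.json();

  return {
    props: { post },
    revalidate: 60, // regenerate this page at most once every 60 seconds
  };
}

export async function getStaticPaths() {
  return { paths: [], fallback: "blocking" }; // build pages on first request
}

export default function Post({ post }) {
  return <h1>{post.title}</h1>;
}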

See Rendering Strategies and Next.js Basics for more details.

AMP

Next.js supports AMP.
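
A minimal sketch of an AMP-only page using the page-level config export:

// pages/index.js
export const config = { amp: true };

export default function IndexPage() {
  return <h1>Welcome to the AMP-only page</h1>;
}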

Performance & Core Web Vitals

See SEO for details.

Auto Image Optimization

Tutorial

On-demand Optimization

Instead of optimizing images at build time, Next.js optimizes images on-demand as users request them. Unlike static site generators and static-only solutions, build times don’t increase, whether shipping ten images or ten million images.

Lazy Loaded Images

Images are lazy loaded by default. Page speed won’t be penalized for images housed outside of the viewport. Images only load when they come into view.

Avoids CLS

Images are always rendered with explicit dimensions, so the browser can reserve the space before they load, avoiding Cumulative Layout Shift (CLS).

Sample Code

// pages/index.js
import Image from "next/image";

export default function Hero() {
  // /hero.jpg is a placeholder image in the public folder; width and height
  // let Next.js reserve layout space, preventing CLS
  return <Image src="/hero.jpg" alt="Hero" width={1920} height={1080} />;
}

Dynamic Imports

Tutorial 1, Tutorial 2

Goal: reduce the amount of JavaScript loaded during the initial page load from third-party libraries.

import dynamic from "next/dynamic";

// Dynamic import: replaces a static import so the component's code is split
// out of the initial bundle and only loaded in the browser (ssr: false)
const CodeSampleModal = dynamic(() => import("../components/CodeSampleModal"), {
  ssr: false,
});

Optimizing Fonts

Tutorial

Next.js has built-in Automatic Webfont Optimization. By default, Next.js automatically inlines font CSS at build time, eliminating an extra round trip to fetch font declarations. This results in improvements to First Contentful Paint (FCP) and Largest Contentful Paint (LCP).

<!-- Regular version -->
<link href="https://fonts.googleapis.com/css2?family=Inter" rel="stylesheet" />
<!-- Optimized: font CSS inlined at build time -->
<style data-href="https://fonts.googleapis.com/css2?family=Inter">
  @font-face{font-family:'Inter';font-style:normal.....
</style>

Optimizing Third-Party Scripts

Tutorial

Embedding third-party code can delay page content from rendering and hurt performance if it is loaded too early. Next.js provides a built-in Script component that optimizes loading for any third-party script, while giving developers the option to decide when to fetch and execute it.

import Head from "next/head";
import Script from "next/script";

function IndexPage() {
  return (
    <div>
      <Head>
        {/* Regular: render-blocking third-party script */}
        <script src="https://www.googletagmanager.com/gtag/js?id=123" />
      </Head>
      {/* Optimized: next/script is placed outside next/head and defers
          fetching and execution until after the page is interactive */}
      <Script
        strategy="afterInteractive"
        src="https://www.googletagmanager.com/gtag/js?id=123"
      />
    </div>
  );
}

export default IndexPage;
 

Monitoring your Core Web Vitals

Link

Next.js Analytics

Next.js Analytics allows you to analyze and measure the performance of pages using Core Web Vitals.

Custom Reporting

It is also possible to use the built-in relayer that Next.js Analytics uses and send the data to your own service, or push it to Google Analytics. Add the following to pages/_app.js.

export function reportWebVitals(metric) {
  console.log(metric);
}
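
A sketch of forwarding metrics to your own endpoint (/analytics is a hypothetical collection URL); this mirrors the pattern in the Next.js docs:

// pages/_app.js
export function reportWebVitals(metric) {
  const body = JSON.stringify(metric);
  const url = "/analytics"; // hypothetical endpoint

  // Prefer sendBeacon so the request survives page unload; fall back to fetch
  if (navigator.sendBeacon) {
    navigator.sendBeacon(url, body);
  } else {
    fetch(url, { body, method: "POST", keepalive: true });
  }
}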

Data Studio

See https://nextjs.org/learn/seo/monitor/data-studio. It uses the Chrome User Experience Report dataset.
