Detlef Johnson – Search Engine Land

Chrome 88 Adds to Core Web Vitals DevTools
Fri, 22 Jan 2021

Chrome 88 allows you to view your site's LCP, FID, and CLS data in DevTools.

The post Chrome 88 Adds to Core Web Vitals DevTools appeared first on Search Engine Land.

Newly released Chrome 88 adds features for developers preparing for the upcoming Page Experience update, which adds the relatively new Web Vitals metrics to Google's pre-existing user experience ranking signals.

Web Vitals measure several aspects of page performance, which are then collated into three summary scores known as Core Web Vitals: Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). Each of these must be optimized to meet its performance threshold, or you risk losing ground to more performant peers who might appear above you in Google rankings.

Two new features of note

The first new feature of note for us is that Chrome 88 now supports a CSS property called aspect-ratio. Aspect-ratio rules allow you to define width-to-height ratios for elements that, under certain circumstances, can help you optimize Cumulative Layout Shift. Until now, you could define a width or a height HTML attribute on an image tag, and most browsers would try to calculate the missing dimension.

A designer would do this with variable images, such as those supplied by users, to fit the result into a layout system. This capability is now available for you to apply as a CSS rule to images as well as other kinds of elements. Among other benefits, this new rule can help you better plan responsive layouts without having to resort to hacking percentage dimension calculations to achieve a final layout look.
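As a minimal sketch of the idea, the rule below reserves a 16:9 box for an image before it loads, so surrounding content doesn't jump when the file arrives (the class name is hypothetical):

```css
/* Reserve a 16:9 box for the image before it loads, so the
   surrounding layout does not shift (helps CLS). */
img.hero {
  width: 100%;
  height: auto;
  aspect-ratio: 16 / 9;
  object-fit: cover; /* crop rather than distort if the file's ratio differs */
}
```

The same property works on non-image elements, such as a div that will later hold an embed or ad slot.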

Web Vitals lane in Chrome 88 DevTools

The other exciting new feature is that Web Vitals now gets its own reporting lane in Chrome 88 DevTools. While the timings lane flags for these (and a few other) metrics have been available for some time, there is lots of new space reserved in the new Web Vitals lane for even more detailed reporting.

As it is, the flags in the Web Vitals lane are color-coded: green for a passing score and red for failure to reach a good performance threshold. Hovering your mouse over a particular flag brings up the identifying abbreviation of the metric name and the recorded timing in milliseconds. Note that the colors for metrics in the timings lane are not indicative of a score.

Layout Shift (red flag score of 748.4 ms) in the new DevTools Web Vitals reporting lane.
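Under the hood, a CLS-style score is an accumulation of individual layout-shift entries, excluding shifts that follow recent user input. A rough sketch of that accumulation (the current CLS definition also groups shifts into session windows, which is omitted here), assuming entry objects shaped like the browser's LayoutShift performance entries:

```javascript
// Sketch: accumulate a CLS-style score from layout-shift entries.
// Entry shape assumed to mirror the browser's LayoutShift entries:
// { value: number, hadRecentInput: boolean }.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput) // ignore user-initiated shifts
    .reduce((total, entry) => total + entry.value, 0);
}

// In a real page you would feed this from a PerformanceObserver
// observing { type: 'layout-shift', buffered: true }.
const score = cumulativeLayoutShift([
  { value: 0.02, hadRecentInput: false },
  { value: 0.3, hadRecentInput: true }, // excluded: user had just interacted
  { value: 0.05, hadRecentInput: false },
]);
```

Keeping that accumulated total under 0.1 is what earns the green flag in the reporting lane.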

There is also a new long tasks reporting area, which you can line up with main thread events to detect which scripts are being evaluated, so you can troubleshoot whether the code can be optimized or eliminated.

To add the new Web Vitals lane to DevTools, navigate to the Performance tab and select the associated checkbox. You can do this even after reporting data has been collected, in case you ran a report before selecting the checkbox. Chrome also remembers the state of the checkbox between sessions, so it will be set the way you left it when you reopen the browser.

Web Vitals checkbox in the Chrome DevTools Performance tab

Keep in mind that any particular score is a summary of other metrics. Look for activity preceding or lining up with a failing score milestone in order to figure out what might be causing the problem. The Network and CPU reporting lanes can help you detect what it is: you might find references to render-blocking scripts for you to evaluate, or image loading events where you might discover an opportunity to compress or resize files.

Cold start performance reports

For these metrics to be most accurate, you’ll need to load a performance report without assets already stored in the browser cache. Start a recording and hold Shift while refreshing the page, which should force your browser to load all resources from the network. You can also select the reload button in DevTools under the Performance tab. Be forewarned, however, that the resulting report won’t necessarily capture everything you want.

Both approaches are valid. A cold start with an open-ended recording process lets you decide when to start collecting information with Shift-refresh and when to process the result into the final report by clicking to stop the recording. Once your report is loaded, you can select, narrow in on, and magnify areas of concern along the reporting timeline. Chrome also stores a history of reports, which you can clear when it’s time to analyze another page.

As is true with all ranking signals, no single factor is going to boost you to number one across the board. Keep in mind that poorly performing websites can still rank well when content is indexable and of a quality that attracts search users. From a Technical SEO standpoint, good Core Web Vitals scores aren’t an indication that your content is indexable at all. That should serve as a warning that there can be lots more to troubleshoot once you’ve achieved excellent performance scores.

Why we should care

The Page Experience update, coming up in a matter of months, is reason enough for us to care about our Core Web Vitals scores. Google has given us ample time to prepare our sites for the change. As optimization professionals, we seek every advantage we can get to improve rankings, and optimizing Web Vitals undoubtedly improves things for both users and search engines.


Google’s Lighthouse is now recommending JavaScript library alternates
Tue, 05 Jan 2021

Lighthouse now serves as a beacon for warning you when your project includes old libraries that have modern alternatives.

The post Google’s Lighthouse is now recommending JavaScript library alternates appeared first on Search Engine Land.

Using the notion of a lighthouse as a metaphor, Google Lighthouse steers developers away from the rocks by shining light at issues it discovers on an asset-by-asset basis. With specific feedback for performance and security improvements, reports include references to media that could use resizing with compression, new or different cache policies, and linked files that contain blocks of unused CSS and/or JavaScript.

Until September, however, Google wasn’t calling out JavaScript libraries themselves.

And now, the warnings have graduated to appear in PageSpeed Insights.

A word about JavaScript

In the open-source JavaScript world, developers stand on the shoulders of the developers who came before them, especially those who solved something that would otherwise have to be solved anew by each library author. The prevalent JavaScript packaging system, NPM (Node Package Manager), eases the inclusion of preexisting libraries in your project. From its starting point, a given JavaScript project is the tip of an iceberg made of much more JavaScript underneath, usually stored by NPM in the node_modules directory.

It stands to reason that projects, especially those created using a sophisticated framework, only ever make use of a tiny percentage of the code available to them through numerous library dependencies. That’s why there’s an optimization process known as “tree shaking” for bundling, as much as possible, only what’s actively being used by a given project. Tree shaking doesn’t always work well with older libraries due to the moving goalposts of keeping up with modern syntax and coding patterns.

About Frameworks

Frameworks make life easier for developers by removing the complexity of making library choices, configuring bundlers, and writing scripts to automate optimization processes for production. With a “quick start” recipe found in most documentation, developers can get up and running using the prewritten command-line scripts that come as part of most packaged frameworks. An example of this is Create React App, which scaffolds blank React application code ready for you to develop further into a web application.

$ npx create-react-app my-app

Running the above command creates a “my-app” directory and spawns the entire React app directory tree into it, complete with all the required files and library dependencies. Optimizing your production bundle, which can contain code from several JavaScript libraries, is perhaps the most important reason modern framework packages have tools and steps that weed out unused blocks of code and minimize output for production.

When you choose to use a framework you accept, perhaps without knowing all the details, the difficult decisions of architecture, configuration, and library dependencies of that framework. Here is the guide for optimizing React for production from the makers of the popular frontend library currently used by many projects and frameworks such as NextJS.

It’s all too common that a handful of older libraries (highly useful in their day) have found their way into project bundles as dependencies. Websites, libraries, and frameworks authored prior to major JavaScript milestones show their age when using deprecated code, as JavaScript fundamentally progresses at a breakneck pace. Lighthouse now serves as a beacon for warning you when your project includes old and/or vulnerable code.


A notable library, MomentJS (with 12 million downloads per week as of September 2020), is the first one Lighthouse points out as having a few better options. Google’s logic here is irrefutable and rather well known. In response, Moment’s own homepage and documentation now mirrors the advice provided in Lighthouse’s reporting. Moment is feature frozen, with only stability updates planned.

Other libraries that Google has under the microscope are Lodash and possibly Underscore. There have been legitimate negative feelings expressed about this throughout the developer community, with more than one developer calling it “toxic” or “harmful” to the open-source community. The complaints have to do with Google “shaming” libraries without giving enough context and promoting alternatives that can harm the discovery of smaller and lesser-known library alternatives.

While all that is true, it’s also true that you have to break eggs in order to make an omelet. Progress often will hurt some folks. Google relies on a third-party reference (BundlePhobia) for collecting alternate library lists. They further vet choices based on a “high bar for equivalency” and “ease of migration” as determined by the Lighthouse team.

Lesser-known library authors can get in the mix by submitting their library to the service. Additionally, a developer commented that since Google is now officially recommending libraries, it should help finance open source by donating to those projects, and a Lighthouse team member on Twitter agreed to begin doing so starting in 2021.

What this means for TechSEOs

It’s true that TechSEO practitioners do not have to be developers in order to be excellent at their jobs. It’s also true that the more familiar you are with the struggles developers face, perhaps by learning a little about coding and at least understanding the details of Google’s Lighthouse as much as possible, the better you will be able to communicate problems and practical solutions to developers.

Replacing MomentJS wholesale may be anywhere from super easy to frighteningly complex for the recipient of the news that it needs replacing. Unless you are a developer yourself, or have at least dabbled in a little web development using modern JavaScript libraries and frameworks, it will be difficult for you to know just how painful switching out a library like Moment for a smaller alternative may be for a particular project.

It has to do with how much that library has been integrated into the codebase. Code patterns may have to be completely rewritten throughout an application. The bigger and more interconnected the application codebase is, the harder it will be to accomplish the replacement. It isn’t always as simple as saying “Just use one of the alternate libraries that Google is telling you to use instead.”

One of the daunting thoughts that can occur when learning one has to replace Moment comes from the fact that objects in the library are written to be mutable (changeable). Keeping this pattern is seen as necessary for backward compatibility, and it really complicates replacing Moment with another library. Whole blocks of code may need to be rewritten in order for your application to accommodate the fact that variable values assigned using Moment somewhere in a call chain can’t be counted on as immutable values inside your application codebase.

A modern pattern that relies on immutable objects from a newer library is more stable, but getting there can require rewriting every instance where Moment gets used.
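For simpler formatting needs, the built-in Intl API plus plain Date objects can stand in for common Moment usage without adding any library weight. A minimal sketch of the immutable style, where helpers return new Date instances instead of mutating their input (the helper name is our own, not from any library):

```javascript
// Sketch: Moment-style formatting with the built-in Intl API.
// Unlike Moment objects, operations here return new Date instances
// instead of mutating the original.
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric',
  timeZone: 'UTC', // pin the zone so output is deterministic
});

function addDays(date, days) {
  const copy = new Date(date.getTime()); // copy first: never mutate the input
  copy.setUTCDate(copy.getUTCDate() + days);
  return copy;
}

const published = new Date(Date.UTC(2021, 0, 22)); // months are zero-based
const weekLater = addDays(published, 7);
const label = formatter.format(weekLater); // "January 29, 2021"
```

Because `addDays` copies before changing anything, `published` still holds its original value afterward, which is exactly the guarantee Moment's mutable objects don't give you.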

SEO for Developers

Optimizing JavaScript for production is very much in the wheelhouse of the TechSEO working on code, or providing feedback to a developer who may be unfamiliar with SEO. Developers should understand performance optimization in the first place, but it can’t be presumed that they know about BundlePhobia, Lighthouse, or SEO for that matter.

If you made it this far and wish to know more about coding in order to provide a better service to your clients, then you’re in luck. This year, we’re going to be producing the SEO for Developers Workshop as an optional add-on to the SMX conference series. The information presented will be targeted at guiding your journey from wherever you are as a TechSEO practitioner to wherever our collective paths lead us into coding. Given how fast things are progressing, the sky’s the limit!


SEO for reactive JavaScript using React or Vue with NodeJS and other backend stacks
Tue, 20 Oct 2020

How to get SEO-friendly website pages to feel like there's magic burbling up beneath them.

The post SEO for reactive JavaScript using React or Vue with NodeJS and other backend stacks appeared first on Search Engine Land.

During our live discussion on how to Power SEO Friendly Markup With HTML5, CSS3, And Javascript earlier this year, we spent a good deal of time talking about React. I’m going to dig into the nuances of React and what you need to keep in mind for SEO. We’ll be using code for analysis by Russ Jeffrey, director of strategic integrations at Duda, who participated in the discussion.

To React, or not to React?

A React framework can make website (application) pages feel like there’s magic burbling up beneath them as dashboard details are kept in concert with a community of lively users, kind of like the experience you expect from Facebook and Twitter.

Business requirements must dictate what technology you end up using on a project, however. If you are writing an app that needs a Facebook type of dynamic, then you’re going to want to invest in creating a reactive framework to deliver it. Yet, in reality, very few sites have those kinds of requirements. In most cases you are better off with jQuery, or vanilla JavaScript if you can get away with it, for basic performance reasons.

If you use one reactive element on a handful of pages, then there are ways to confine your reactive code to where and when it is needed: write server-side (or even client-side) conditional code to load it. If all you want to do is power popup modals, interactive menus, tabbed content and the like, then the cost vs. benefit consideration of reactive libraries won’t favor them against other approaches.
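One server-side way to confine a heavy reactive bundle is a small routing helper that decides which scripts each page actually needs. This is a sketch of the idea only; the route names and bundle filenames are hypothetical:

```javascript
// Sketch: include the heavy reactive bundle only on routes that need it.
// Route names and bundle filenames here are hypothetical.
const REACTIVE_ROUTES = new Set(['/dashboard', '/chat']);

function scriptsFor(path) {
  const scripts = ['main.js']; // lightweight code every page gets
  if (REACTIVE_ROUTES.has(path)) {
    scripts.push('reactive-widgets.js'); // React/Vue bundle, only where needed
  }
  return scripts;
}
```

On the client side, the same idea can be expressed with a dynamic `import()` guarded by a check for the element the widget mounts into, so visitors to static pages never download the reactive code at all.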

Rendering strategies for SEO

You will definitely need to think differently about how and when your web app renders important content for SEO. That is the lesson Russ brought us in the second half of our discussion. He shows examples of how to build server-side and ship an optimized app shell alongside JavaScript, so crawlers can get what’s necessary for SEO while the remainder is rendered client-side using ‘hydration.’

Watch the full discussion here.


Application programming in this context typically involves more than the frontend library alone. A convention-over-configuration file layout with utility scripts can constitute what is referred to as a framework for React or Vue. Next and Nuxt are NodeJS-based frameworks for React and Vue, respectively.

Frameworks simplify scaffolding project files and services according to conventional specifications and best practices. Russ provided us with links to GitHub projects demonstrating how to integrate React and Vue with several other popular backend programming languages. Check them out if you want a different runtime process on the backend than NodeJS.

SEO code snippets with NodeJS

When you reach enterprise or startup level requirements, a service level agreement with a framework may not be possible. Russ walks us through getting started with SEO code snippets based on NodeJS (with Express) alone.

Three key files in both sample React and Vue projects contain the necessary code for our analysis:

  • The app.js file governs a “blog” app shell.
  • The server.js file pulls in the Express library and configures it for request handling, including render methods for SSR.
  • The index.js file serves as the entry point for the NodeJS runtime process.

SEO-friendly React

The App.js example for React demonstrates routing SEO-friendly paths to URLs that don’t rely on fragments for SPA-style virtual page views. In server.js, a ‘context’ data object supplies resource details when calling ReactDOMServer.renderToString() to render our app shell with context based on the URL and potentially other criteria.

The server.js file also refines the data context object. Russ demonstrates replacing the title and other metadata to complete the SEO for a constructed app shell before finally sending it on its way to the browser.
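The title and meta replacement step can be sketched as simple string substitution into the app-shell template before the response is sent. The placeholder names below are a hypothetical convention of ours, not Russ's exact code:

```javascript
// Sketch: inject per-URL title, meta description, and rendered app
// markup into an app-shell template before sending it to the browser.
// Placeholder names (__TITLE__, __META_DESCRIPTION__) are hypothetical.
function renderShell(template, { title, description, appHtml }) {
  return template
    .replace('__TITLE__', title)
    .replace('__META_DESCRIPTION__', description)
    .replace('<div id="root"></div>', `<div id="root">${appHtml}</div>`);
}

const template = [
  '<html><head><title>__TITLE__</title>',
  '<meta name="description" content="__META_DESCRIPTION__"></head>',
  '<body><div id="root"></div></body></html>',
].join('');

// appHtml would come from ReactDOMServer.renderToString() in server.js.
const page = renderShell(template, {
  title: 'Hello World | Blog',
  description: 'First post on the blog',
  appHtml: '<h1>Hello World</h1>',
});
```

Because the crawler receives this fully populated HTML, the SEO-critical title, description, and content don't depend on client-side JavaScript executing.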

Lastly, index.js serves as the starting point for the NodeJS process and ReactDOM.hydrate() is used to flesh out our app with less important auxiliary content after the shell loads.

A ‘build’ directory contains index.html as a target file for SSR template construction. Two other files, the components Home and Posts, are stored using a .js extension, which is implied by convention and therefore does not need to be explicitly spelled out in the import statements. We’re going to expediently skip analysis of component files, except to say that it’s typical to find component files re-organized into a components subdirectory.

In Russ’s examples, all files, including component files, are located in the project base directory. This is what the directory tree for the React files looks like:
├── App.js
├── Home.js
├── Posts.js
├── build
│   └── index.html
├── index.js
└── server.js

If you’re not familiar with JSX syntax, it is a JavaScript language extension that lets React components embed XML-like markup, so template code can include JavaScript. Child components are imported and referenced later by naming convention (Home.js and Posts.js map to <Home/> and <Posts/>, respectively) in the XML-like template block.

Russ makes use of components from the react-router-dom library (which you may need to install via NPM): Router and StaticRouter, Switch, and NavLink. These provide ready-made helpers for common tasks, such as using the NavLink ‘to’ prop to generate HTML links to a URL path or resource, which is somewhat analogous to Rails’ ‘link_to’ helper.

Through Switch and Router in App.js, Russ demonstrates the syntax for matching URL paths. Notice the ‘exact’ keyword on the index path statement. It’s required so the route matches only ‘/’; otherwise it would match every path in the application! The ‘exact’ keyword changes the default behavior from a greedy prefix match, which would also match ‘/posts’, ‘/posts/hello-world’, and so on.
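The difference can be sketched with a toy matcher (this is our simplified illustration, not react-router's real matching algorithm, which also handles path parameters and trailing slashes):

```javascript
// Toy illustration of prefix vs. exact route matching (not
// react-router's real implementation). A plain prefix match on '/'
// fires for every path; 'exact' restricts it to '/' alone.
function matches(routePath, urlPath, { exact = false } = {}) {
  if (exact) return urlPath === routePath;
  return urlPath.startsWith(routePath); // greedy prefix match
}
```

Without `exact`, the index route would claim every URL before Switch ever reaches the more specific routes below it.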

In server.js Russ makes use of a common external framework called Express to set up our application with the required port listener and response methods needed to serve our app on the network. If you are working locally, you’ll want to set a local environment variable, PORT, to match an open port that you plan to work with via localhost requests. In production, this will usually need to be port 80.

Vue framework

Lastly, Vue is one of the more approachable reactive frameworks, and that approachability is felt from the start. Template files are literally HTML with handlebar-style JavaScript interpolation. If you’re using Vue, it’s more likely that you’re working with your own backend, although Nuxt is a state-of-the-art framework for Vue if you decide to go that route.
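The handlebar-style interpolation Vue templates use can be illustrated with a toy renderer. To be clear, this is only a sketch of the syntax idea; Vue's real template compiler does far more (directives, expressions, reactivity tracking):

```javascript
// Toy illustration of {{ mustache }} interpolation, the template
// syntax Vue uses. This is not Vue's compiler, just the idea:
// replace each {{ key }} with the matching value from a data object.
function interpolate(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key) =>
    key in data ? String(data[key]) : ''
  );
}

const html = interpolate('<h1>{{ title }}</h1><p>{{ body }}</p>', {
  title: 'Hello',
  body: 'World',
});
```

Because the template is plain HTML until interpolation runs, designers can read and edit Vue templates without knowing much JavaScript, which is a big part of Vue's approachability.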

Russ points us to the Vue SSR documentation and his code snippets operate using essentially the same file structure and layout for implementation of the Vue version of our blog app. Find the code for both React and Vue available below in the following Gist.


Google opens the source for its robots.txt parser in Java and testing framework in C++
Wed, 23 Sep 2020

The new releases are from Google's Search Open Sourcing team.

The post Google opens the source for its robots.txt parser in Java and testing framework in C++ appeared first on Search Engine Land.

Last year, Google open sourced the code for the robots.txt parser used in its production systems. After seeing the community build tools with it and add their own contributions to the open-source library, including ports of the original C++ parser to Go and Rust, Google announced this week that it has released additional related source code projects.

Here’s what’s new for developers and tech SEOs to play with.

C++ and Java. For anyone writing their own parser or adopting Google’s parser written in C++ (a super-fast compiled language), Google has released the source code for the robots.txt parser validation testing framework it uses to ensure parser results adhere to the official robots.txt specification, and the framework can validate parsers written in a wide variety of other languages.

Additionally, Google released an official port to the more popular Java language. Modern Java is more widely used in enterprise applications than C++, whereas C++ is more typically used in core system applications where performance needs demand it. Some Java-based codebases run applications today for enterprise SEO and/or marketing software.

Testing and validation. Requirements for running the test framework include JDK 1.7+ with Apache Maven, and Google’s protocol buffers to interface the test framework with your parser platform and development workstation. It should be useful to anyone developing their own parser, validating a port, or utilizing either of Google’s official parsers, and especially for validating your development of a port to a new language.

How difficult would this be to use? We should note these are relatively approachable intern-led projects at Google which ought to be consumable by moderate to higher level programmers in one or more of these languages. You can build a robots.txt parser using practically any programming language. It adds perceived authority, however, when your marketing application runs the exact same parser that governs Googlebot.
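To give a sense of the scale of the task, here is a deliberately minimal robots.txt parsing sketch in JavaScript. A spec-compliant parser like Google's handles far more (wildcards, longest-match precedence, percent-encoding, case rules), which is exactly why the released testing framework is valuable:

```javascript
// Minimal robots.txt parsing sketch: collect Allow/Disallow rules per
// user-agent group. Intentionally simplified versus the official spec.
function parseRobots(text) {
  const groups = {};
  let agents = [];          // user-agents in the current group
  let inDirectives = false; // have we seen a rule since the last agent line?
  for (const raw of text.split('\n')) {
    const line = raw.split('#')[0].trim(); // strip comments
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const field = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (field === 'user-agent') {
      // A user-agent line after directives starts a new group.
      if (inDirectives) { agents = []; inDirectives = false; }
      const name = value.toLowerCase();
      agents.push(name);
      if (!groups[name]) groups[name] = { allow: [], disallow: [] };
    } else if (field === 'allow' || field === 'disallow') {
      inDirectives = true;
      for (const name of agents) groups[name][field].push(value);
    }
  }
  return groups;
}

const rules = parseRobots(
  'User-agent: *\nDisallow: /private/\nAllow: /private/public.html\n'
);
```

Getting this toy version to agree with Googlebot on every edge case is the hard part, and that is the gap the validation framework is designed to close.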

Why we care. If you or your company has plans to write, or has written, a crawler that parses robots.txt files for directives containing important information (not just for SEO), then this gives you incentive to evaluate whether using Google’s parser in C++, Java, or one of the other language ports is worth it. The Java parser in particular should be relatively easy to adopt if your application is already written in Java.


Power SEO Friendly Markup With HTML5, CSS3, And Javascript
Thu, 20 Aug 2020

Live with Search Engine Land: SEO Developers tackle server-side rendering (SSR) with lively discussion around communicating technical SEO with developers.

The post Power SEO Friendly Markup With HTML5, CSS3, And Javascript appeared first on Search Engine Land.

In the first session of our initial three-part series, SEO for Developers: Live with Search Engine Land, we began covering Technical SEO and communicating issues between practitioners and developers. For a well-rounded conversation, we hosted speakers with perspectives from the practitioner side of things as well as practical experience with in-house SEO as part of an enterprise team. Our guests were:

The video is great if you’re looking to hear new ideas for effecting change with your clients, with developers, or wanting to be more effective from within your organization. Learn about team building tactics with developers in the mix as well as struggles you may face when you’re not part of an organization.

The session continued in a second part focused on fundamental JavaScript SEO, complete with code examples for React and Vue to give you a running start with those projects. Learn to solve some indexing problems with these popular JavaScript (framework) libraries and find the tips you need to implement SEO in similarly scaffolded or boilerplate JavaScript projects.

Communicating requirements of Technical SEO to developers

Depending on your situation, communicating SEO to developers can range from feeling like you’re always walking on eggshells, being very careful not to tread on ego or territory, to the other extreme, where you experience sheer frustration that however much you stress the importance of a needed change, it seems hopeless, as if you’re shouting into the void.

How can you best navigate personality problems? There are things you can do to boost your odds of success and avoid common pitfalls, and getting this information out to you was the goal of the first part of our first session.

Anthony and Katie shared tales of how, starting with a grim outlook, they were able to ultimately succeed in partnership with developers, or just succeed anyway. You can hear how both pulled out all the stops to try to be persuasive, using everything from homemade cupcakes to bottles of vodka. While these are often cited as helpful tactics, in practice those ideas didn’t work for them.

Major site changes

During a major site change more than one aspect of a technology stack can change over a short period of time. When you have a correlating loss of traffic, you might associate a drop with the incident and a particular aspect of the technology at that point. That’s when Technical SEO skills and knowing the problem can come to a crossroads where you need to convey your findings to developers who might disagree with you about which path to take.

I did a forensic audit and found technical issues. The lead dev was like: “How do we know it’s not something else?” The answer is, you really don’t know. You just have a gut instinct and a lot of experience to be able to try and guide it in that way. We made the changes and right before the busiest time of year we saw a 40 point swing to the upside with millions in additional revenues. The GM had said: “I’m in awe. You know, this is great.” At that point the lead dev decided to re-platform to React.

Anthony Muller

There’s always a chance developers have a bias towards a technology that they’re comfortable with, or excited to be using. As developers, we like to think of ourselves as not holding an unwarranted bias for a technology, but in reality we want to control our own programming environment. We aren’t always able to and when we can we might have a preference, same as anybody else.

When there’s money on the line you have to counteract any favoritism which can require self-analysis. Problems will arise when ulterior motives give us an inclination to use inappropriate technology as a way to use what’s most familiar or gain experience with the latest JavaScript libraries.

Problems of a technology choice aren’t always developer-borne issues. In our third video, Martin Splitt spoke of developing a banking application with Angular. Angular, unfortunately, then became the anointed technology to use for everything. That was a mistake of leadership assuming a solid technology choice in one area of business is a safe bet everywhere else.

Things are never that easy.

The trouble with React is …

ReactJS is a terrific User-Interface (UI) builder for the frontend. Confusion arises when developers want to simplify the notion of a webpage down to that of a UI when it’s not only that. A webpage can be interactive with JavaScript in ways that do not require a UI. Using React in certain conditions will lead to over-engineering with a result that we have a history of Single Page App (SPA) websites that typically don’t rank well.

What’s more, the underlying technology stack powering React is not ideally suited for static websites even though it can certainly be used for them. For example, there’s Gatsby, a Static Site Generator (SSG) built on React and its conventions. Believe it or not, plain old boring jQuery is still a far more appropriate choice for most static sites than Gatsby.

React is definitely an important innovation. When you need reactive page elements as part of site functionality, in other words, elements that change when universal or user-specific data changes, that’s when React becomes an excellent choice! You get all the advantage of a paradigm shift from jQuery to a component-based reactive library for developing cutting-edge interactivity. For example: If you want to roll your own chat, look into React.

Developers only need to avoid using React in cases where jQuery or vanilla JavaScript is what’s actually called for. Therein lies the problem, because they aren’t inclined to avoid using arguably the greatest client-side library innovation since jQuery. They all want to sharpen their knowledge of the latest greatest thing for employability. There are numerous open jobs for React programmers. We’re going to learn how to set it up correctly.

Server-side rendering

A partial solution to the problem, known as Server-Side Rendering (SSR), is probably best described as a ‘hack’ bolted in place after feedback that early renditions of these libraries were not search engine-friendly. Russ describes how React still tends to promote scaffolding or boilerplate that defaults to Client-Side Rendering (CSR) by convention. He shows us how to set yourself up for SEO with React and Vue.

A note about Evergreen Chromium

Evergreen Chromium keeps Googlebot up to date with the latest Chrome version. Google can now fetch CSR content fairly easily, but it’s certainly no silver bullet. Developers may think it means SSR is unnecessary, but for Googlebot your critical content is not immediately available and it may not be available at all without taking careful measures to ensure that it is.

It’s certainly not ideal for SEO, either. Even when you might fare a little better now with Google than in the past, you need to consider social media crawlers. Bing switched to Evergreen Chromium, but Facebook and Twitter haven’t done so yet and who knows if they ever will?

How about operationalizing SEO?

Working from within an organization, and with a sizable development team, Katie found that filing issues through the ticketing process wasn’t working fast enough for technical SEO changes. Additionally, there was no way for her to gauge the relative importance of her SEO requests versus whatever else the development team was working on.

After attending Search Marketing Expo (SMX) West’s keynote with Jessica Bowman (In-House SEO), Katie was inspired to try a different approach.

She was talking about operationalizing SEO and saying that anyone touching the website could be making multi-million-dollar SEO decisions without realizing it. You’re always going to be outnumbered by people who are touching it. There’s never enough SEOs to have an SEO in the room for all these things. If you feel like you’re running around chasing fires all the time then you need to operationalize SEO.

Katie Meurin

Katie brought her developer teammates to more SMX session content and, once back at work, they began to ask her questions about how the changes they were considering might impact the website’s SEO. This was the breakthrough she needed to go from being stuck outside in a separate silo to working inside with the development team.

Since team building sessions fostered these more productive communications, Katie continued to organize technical SEO trainings in-house and looks forward to a whole new build where SEO is a fundamental feature of the forthcoming new website.

The developers she worked with learned about SEO tools and began using some of them directly in their workflow. They tested development branch versions with command-line SEO tools to make sure they achieved good scores with Lighthouse and, now, Web Vitals. Disagreements about SEO particulars got resolved quickly; they were typically just a matter of language, which Katie’s team documentation helped clarify.

It was through these experiences that Katie was able to raise the priority of her technical SEO work with the development team, whose members came to truly appreciate the business impact of what they were doing. It was a huge shift: from not knowing whether her technical SEO tickets were prioritized above a mystery plate of other work to developers caring about SEO every bit as much as they might care about frontend design details.

Server side rendering (SSR)

So, what happened to Anthony’s client when they switched to React before Googlebot’s Evergreen Chromium release? Imagine 80% of revenue tied dollar-for-dollar to tanking rankings. Anthony tried everything to be persuasive, including bringing in an outside developer to recommend implementing SSR.

To satisfy SEO requirements, you’re going to need SSR strategies that ship code with fleshed-out and optimized content, or your rankings will not reflect the value of your website pages.

The lead developer was (rightly) disappointed to hear advice to implement SSR, negating all the practical advantage of using a reactive library in the first place. The unwarranted technology preference for React with a static site was suddenly a technology obstacle which began to haunt them as technical debt they didn’t want to pay down.

The lead developer insisted on alternative explanations for what was happening and inexplicably resisted the recommendation to move to SSR. In the meantime, Google launched its Evergreen Chromium initiative, and the new Googlebot indexing produced a 7% traffic lift, which allowed the developer to further delay the inevitable.

It was not enough to recover lost revenues, and it became increasingly clear that React was a bad choice of technology for powering a static website. When Anthony’s SSR recommendation was finally put in place, search traffic quickly rose back up by 60%. Imagine the difference that forgone revenue made over the time spent languishing with such a basic and obvious rendering issue.

JavaScript SEO for React and Vue

Developers need to be flexible enough, in both skills and attitude, to implement SSR for SEO with these popular JavaScript libraries (frameworks). Russ provided an excellent introductory dive into how to go about it with React and Vue, along with quick tips on the essential SEO to include alongside. We’ll cover all the details in our next installment before moving on to scraping by scripting with Puppeteer.

The post Power SEO Friendly Markup With HTML5, CSS3, And Javascript appeared first on Search Engine Land.

Guide to Core Web Vitals for SEOs and Developers /google-core-web-vitals-guide-for-seo-developers-337825 Tue, 21 Jul 2020 19:07:07 +0000 /?p=337825 What you need to know about Google’s latest set of key user experience metrics for Web Developers.

The post Guide to Core Web Vitals for SEOs and Developers appeared first on Search Engine Land.

Core Web Vitals, or just Web Vitals, are a new set of performance metrics that help highlight aspects of web page development that affect User Experience (UX): page loading, interactivity, and visual stability. Google is set to make Core Web Vitals ranking factors as part of the Page Experience Update some time in 2021.

These metrics center on when certain events complete, including what is interactive or visually affected as those events take place, as pages load to a point of stability relative to the user experience. That means score values can change as users interact with your page, and you achieve better scores when events occur faster on the stopwatch.

Each Web Vitals metric is graded according to one of three outcomes:

  • Good (passes)
  • Needs Improvement
  • Poor (fails)

The current metrics are:

  • Largest Contentful Paint (LCP). The time interval between the start of a page load to when the largest image or text block in a user’s viewport is fully rendered. You might see the score change as your page loads and when content is visible but the largest node is still in the backlog yet to be displayed. This gets more noticeable on throttled connection speeds.
  • First Input Delay (FID). The time from a user’s first interaction (such as a click, tap, or key press) until the browser can actually begin processing the corresponding event handler. As pages are assembling, user interaction can be significantly delayed by main thread-blocking script tasks.
  • Cumulative Layout Shift (CLS). The measured distance and fraction of the viewport which shifts due to DOM manipulation or the lack of dimension attributes for major media elements. When we fail to define the dimensions for our hero images, for example, text on our pages first appears only to be displaced, causing a disruptive content layout “shift” for our users.
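As a sketch, the published 2020 thresholds behind those grades can be expressed as a simple lookup (an illustration, not an official implementation):

```javascript
// Published 2020 thresholds: LCP 2.5s / 4s, FID 100ms / 300ms, CLS 0.1 / 0.25.
function grade(metric, value) {
  const thresholds = {
    LCP: [2500, 4000], // milliseconds
    FID: [100, 300],   // milliseconds
    CLS: [0.1, 0.25],  // unitless layout-shift score
  };
  const [good, poor] = thresholds[metric];
  if (value <= good) return 'Good';
  if (value <= poor) return 'Needs Improvement';
  return 'Poor';
}
```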

Long-time users of PageSpeed Insights (PSI) may be familiar with similar metrics, many of which are going to stick around, although perhaps not all of them. Core Web Vitals represent a culmination of those other metrics, born of the developer-experience complexity around them. The simplicity of Web Vitals is meant to cut through the noise with welcome clarity: fewer, grander metrics to follow.

Google plans to limit updates to annual Web Vitals version releases in order to prevent the goal posts from moving too frequently for site developers and SEOs, but you can expect Google to add new metrics over time. It looks like the next addition is going to measure page animations; that metric is under development and won’t be introduced this year, for example.

How to analyze mobile and desktop Web Vitals scores

You get independent Web Vitals scores for mobile (phone) devices and desktop/laptop. In some tools you can specify the device category for a query or test, and you can switch between them when both are available, as in Google PageSpeed Insights. PageSpeed Insights defaults to mobile stats, so you’ll need to switch to the desktop tab to see how a page’s desktop scores differ from its mobile scores.

Google has added Core Web Vitals metrics to Search Console reporting when Chrome User Experience data is available. If you’re accessing Web Vitals scores within Search Console, the dashboard displays both device categories with scores across URLs covered by your indexing. You can drill down into groups of pages that are indicated to have problems.

As part of its Chrome User Experience Report (CrUX), Google exposes field data from over 18 million websites that have tallied enough statistics to report Web Vitals. The data is housed at Google’s BigQuery service, where you can query statistics from these websites going back several years. Updates are ongoing and available the second Tuesday of each month, following accumulation.

To see mobile and desktop scores in the CrUX report, you’ll need to specify ‘phone’ or ‘desktop’ as the device form factor in your SQL statements. Interestingly, ‘mobile’ doesn’t work because it isn’t a recognized form factor value, and ‘tablet’ works only rarely due to the scarcity of that data. Tablet data shows up in queries for the Google origin (domain), for example, but you aren’t going to see it for quieter sites.

Understanding lab vs. field data

Conditions can result in wildly varying scores, and scores can literally change as you navigate pages. It’s important to understand how each score is tabulated, given a certain environment.

You can only truly interpret results after you first determine whether you’re looking at lab or field data. Web Vitals “lab” data is collected via browser APIs as part of page load event timers, with mathematical approximations simulating user interactivity. “Field” data consists of the same metrics collected from actual users navigating your pages, with the resulting event timer values transmitted to a repository.

Both SEO practitioners and developers can access lab data in real time using PSI, WebPageTest, Chrome Dev Tools, and a new ‘Web Vitals’ Chrome browser extension. PSI and WebPageTest tally your scores from page load events and approximate page interactivity delays by adding up thread-blocking script task times.

Lab data tools are incredibly useful in your workflow for reporting and improving scores, and they should make up part of your TechSEO arsenal. For developers, if only a handful of templates power your website, this lab data may be all you regularly need, unless you start seeing problems in field data that catch you off-guard.

You can introduce the Web Vitals JavaScript library to your workflow and testing pipeline. Available via CDN, the library can be included in production HTML and written to transmit independently gathered field data to where you want to collate them for reports. Example code demonstrates how to do so for transmitting scores to Google Analytics.
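A condensed sketch of that pattern follows. The getCLS/getFID/getLCP function names match the web-vitals library releases current at the time of writing, while the analytics payload field names below are purely illustrative assumptions, not a required Google Analytics schema:

```javascript
// Pure helper: shape a Web Vitals metric object ({ name, value, id }) into
// an analytics event payload. Field names are illustrative, not a GA schema.
function toAnalyticsEvent(metric) {
  return {
    eventCategory: 'Web Vitals',
    eventAction: metric.name, // 'LCP', 'FID', or 'CLS'
    // CLS is a small decimal; scale it so the integer field keeps precision.
    eventValue: Math.round(
      metric.name === 'CLS' ? metric.value * 1000 : metric.value
    ),
    eventLabel: metric.id, // unique per page load
    nonInteraction: true, // don't skew bounce rate
  };
}

// Browser wiring (sketch), using the library's callback API:
// import { getCLS, getFID, getLCP } from 'web-vitals';
// const report = (metric) => ga('send', 'event', toAnalyticsEvent(metric));
// getCLS(report); getFID(report); getLCP(report);
```

Splitting the payload shaping out from the browser wiring keeps the transmit logic testable on its own.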

Lighthouse also comes with various access points which can be useful in your development workflow and it includes several additional tests that can help ensure your adherence to modern web standards. Lighthouse can help you debug situations where you are troubleshooting Web Vitals problems.

Comparing lab results with field data. Modern browsers, beginning with Chrome, measure how users actually experience your website in the wild via built-in JavaScript APIs. You can access these with any JavaScript, or adapt one of Google’s libraries to your requirements. Google collects and, as noted, exposes field data from Chrome users for its CrUX report using those same browser APIs.
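As a sketch of what those APIs expose, here is the original running-sum form of the CLS calculation over the browser’s layout-shift entries. The hadRecentInput flag is part of the Layout Instability API; the commented wiring shows how the entries arrive, and real library implementations carry more nuance:

```javascript
// The original, running-sum form of CLS: add up layout-shift values,
// skipping shifts caused by recent user input (hadRecentInput).
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}

// In the browser, entries arrive via PerformanceObserver:
// new PerformanceObserver((list) => {
//   console.log('CLS so far:', cumulativeLayoutShift(list.getEntries()));
// }).observe({ type: 'layout-shift', buffered: true });
```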

There are a few different ways to access or visualize CrUX data. You can utilize connectors from BigQuery output to other Google services for generating dashboards, such as a prebuilt connector for DataStudio.

The easiest way to access field data, once you’ve confirmed your site appears in CrUX, is to verify ownership of your website with Google Search Console. There, the dashboard displays field data with an interface that lets you drill down with clicks instead of writing SQL queries.

Alternatively, you can simply use PSI, which provides field data going back up to 28 days. The API that drives that quick-check report also powers an independent open source JavaScript library, which you can bring into your development workflow or use to power an app dashboard; for demonstration, a developer has already built a slick standalone frontend for it.

Troubleshooting Web Vitals reporting

Due to the dynamic nature of some of the timings and how they’re collected, you’re always going to need to verify lab data by correlating field data so you can debug discrepancies. For example, subsequent page loads can vary your result values when using the Web Vitals Extension. This can happen for a couple of reasons.

Your browser is able to assemble resources faster on refresh by virtue of utilizing its own cache reserve. Additionally, the extension is able to accumulate interactive values as you navigate the page in a way that is useful for approximating real-world field data rather than calculating a score by adding up thread-blocking script task timings.

For more accurate local results with the Web Vitals Extension and Chrome Dev Tools, remember to empty your cache or bypass it with shift-refresh when moving fast in your workflow. Another tip: load ‘about:blank’ before starting a performance recording session in Dev Tools for a clean start to the report.

Ideally, lab and field scores don’t differ too much without a good reason. Whenever you make significant changes, your lab results are going to be ahead of your field data. That means if you see failing tests in the field and you’ve improved lab scores to pass, you either need to be patient for field data to catch up or push field data independently to Analytics to verify it.

You might imagine the trickiest field data score to emulate locally would be CLS, but that isn’t necessarily the case. The Chrome extension has an option to display a Web Vitals overlay so that, as you interact with the page, you can watch score changes while you navigate.

This works for FID as well. The FID score starts empty; with the first page interaction (a click, tap, or keyboard input), the delay imposed by thread-blocking tasks at that moment is recorded and becomes your score.

Finally, the highly detailed information in Chrome Dev Tools lets you troubleshoot CLS to a fine-grained level with performance recording and playback. Look for the “Experience” section that outputs CLS shifts in the recording. There is also a setting for highlighting shifts in the display using a blue flash that wraps elements as they shift and add to your score.

Tool time

PageSpeed Insights. Your first stop measuring Web Vitals should be PageSpeed Insights. You get both lab data and field data (when available) in one report. You also get several other metrics largely related to improving failing pages, particularly findings that affect the speed of a page and downloading its assets.

Web Vitals Chrome Extension. Using the Chrome extension you can access Web Vitals on page load and, as discussed, interact with the page to troubleshoot First Input Delay and/or Cumulative Layout Shift problems. It’s also available page-to-page as you browse websites.

WebPageTest. This independent testing tool lets you configure your approach under a variety of conditions. Built by Google engineers who are part of the Chromium team, it offers information as authoritative as anything you get from Google itself, and it makes RESTful APIs available.

Google Search Console. If you haven’t already verified ownership of your website in Google Search Console, do so; it helps you drill down into problem areas with pages that are failing out in the field, assuming your site appears in CrUX. You can locate groups of pages with similar problems, and ultimately it links you to PageSpeed Insights.

Web Vitals JavaScript APIs. Use JavaScript to access the metrics directly from the browser and transmit them to a repository of your choice. Alternatively, you can introduce the test to your development process and ensure that changes you make aren’t going to negatively affect your scores after you push to production.

Chrome Dev Tools. Chrome itself provides the ultimate set of tools for discovering or tracing back problems using the highly detailed information available in reports and page load recordings in the Performance tab. The extensive array of tools and endless switches and options are ideal for the most exacting optimization work.


Technical SEO for Shopify: A guide to optimizing your store for search engines /technical-seo-for-shopify-a-guide-to-optimizing-your-store-for-search-engines-335251 Thu, 28 May 2020 12:00:00 +0000 /?p=335251 How to address the technical limitations in Shopify to improve your store’s search visibility.

The post Technical SEO for Shopify: A guide to optimizing your store for search engines appeared first on Search Engine Land.

Shopify’s platform makes it simple for beginner store owners to launch their e-commerce sites, but that convenience means that many of the technical aspects of your site have been decided for you. This introduces some Shopify-specific quirks that you may run into when optimizing your site architecture, URL structure, and use of JavaScript.

The basic search optimizations that all Shopify store owners should be aware of are already covered in our main Shopify SEO guide. In this guide, we’ll help you finetune your Shopify site’s technical SEO to help search engines crawl and index your pages.

Tackle duplicate pages

Shopify implements collections for grouping products into categories, making it easier for customers to find them. One drawback is that products belonging to collection pages create a duplicate URL issue. Technically speaking, a Shopify collection stores products that are attached to resource routing semantically, by naming convention, instead of by ID.


resources :products do
  collection do
    get 'collections'
  end
end

This maps the URL to a corresponding “collections” action on the products controller and its resources.

When you associate a product with a collection page (as just about every merchant selling more than a handful of items is likely to do), Shopify automatically generates a second URL to that product page as part of the collection. This means you’ll have two URLs pointing to the same product; the URLs will appear as follows:

  1. /collections/shirts-collection/products/blue-shirt
  2. /products/blue-shirt

Duplicate content makes it more difficult for search engines to determine which URL to index and rank, and having multiple URLs for the same page can split your link equity because referrers may use either URL. The canonical tag provides an adequate solution for the first problem, but not the second.

The canonical tag was introduced by search engines to designate which spelling of URLs (that load the same content) is the preferred spelling for search results. There are many ways multiple spellings can come about, and Shopify’s use of collections generates duplicate URLs for products that belong to collections. Shopify points collection page canonical tags to the product page, which is excellent, but doesn’t solve the split links problem.

Eliminating alternate URLs. Fortunately, there is a known solution that you can use to resolve the issue within your templates, and you wouldn’t necessarily have to be familiar with the syntax (known as Liquid) that the template is built with. Although your mileage will vary depending on the theme you’re using, the edit you would employ should be fairly straightforward and involve an output filter “within: collection.”

To get started with this solution, you’ll first need to log into your Shopify account. From there, access your store’s theme by clicking on Online Store and then Themes from the left-hand sidebar. Next, click Customize in the Current theme section of the interface, as shown below.

In the next screen, click on Theme actions (located at the bottom of the left-hand sidebar) and then Edit code.

A new window will open, displaying your theme’s code templates, where you can find the file that generates your collection links. In our case, we opened the Snippets folder and clicked on “{/} product-card.liquid” to locate the code we needed to edit; it appeared on line 11.

You’ll need to edit that line of code from:

<a href="{{ product.url | within: collection }}" class="product-card">

to:

<a href="{{ product.url }}" class="product-card">

This can be done by deleting the “ | within: collection” portion of the code. Older Liquid theme templates reference the now-deprecated current_collection instead. The important thing is to find the “|” pipe into the “within” filter on Shopify’s collections and remove it.

The “within” filter builds the product link as part of the current collection context, switching what would otherwise be a Rails route of “/products/:id” to “/collections/collection-name/products/:id,” for example. Since we want such snippets to build links only to the canonical product URLs, we simply remove the whole “within” filter, and it works.
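For illustration only (real theme code uses Liquid, not JavaScript), the filter’s effect amounts to something like this:

```javascript
// Illustration only: what Liquid's "within: collection" output filter
// effectively does to a product URL. Real Shopify theme code is Liquid.
function withinCollection(productUrl, collectionHandle) {
  // '/products/blue-shirt' -> '/collections/shirts-collection/products/blue-shirt'
  return `/collections/${collectionHandle}${productUrl}`;
}
```

Removing the filter is equivalent to dropping this prefixing step, so the snippet emits only the canonical “/products/…” URL.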

Overcome site architecture limitations

Another nagging problem associated with Shopify’s CMS is that your site architecture is part of the rigid structure of the Shopify rails implementation. The Shopify CMS is less flexible than WordPress, for example. If you want to perform seemingly straightforward optimizations to your site architecture, there isn’t always an obvious way to do so.

For example, Shopify automatically generates the URL for your product detail pages using the structure /products/product-handle. Store owners can only customize the last part of the URL (the handle), which is derived from the page title.

If you want to remove the “/products” directory to tidy up your URLs, Shopify doesn’t offer that option out of the box. The SEO benefit of clean URLs is that users may be more inclined to click through from the search results, and other webmasters may be more likely to link to URLs that seem more authoritative by design. A URL whose path sits directly under the root implies that authority, and “/products” interferes with it. However, these paths are vital for your Shopify site’s backend to produce its dynamic content. Fortunately, there is an external workaround for this limitation.

The problem arises with conventional system frameworks, which often utilize request strings for organizational logic to access varying resources, so that differing templates are provided what they need for publishing dynamic page content and/or services. In Shopify’s case, it’s the Ruby on Rails routing convention, where “/products/:id” maps URLs to product resources.

We’ve now stepped into territory where you might be better off contacting a web developer unless you know how to write JavaScript and Ruby. These are more complex fixes for certain technical SEO problems that only a developer should implement.

Customizing site architecture with Cloud Workers. Using Cloud Workers from caching solution providers can allow for alternate delivery paths and even deliver alternate resources for requests. Cloud Workers can effectively mask Shopify’s “/products” and “/pages” URL paths.

With JavaScript Workers in the cloud, you have greater control over what’s available to you at the request/response cycle, which can mean control over practically anything. Examples include capturing requests for your sitemap.xml and robots.txt and serving alternate resources on-the-fly, injecting JavaScript into a page or groups of pages, and also cleaning up those pesky Shopify URLs to make keywords more prominent and pathnames more authoritative.

You would need to own a domain for this to work and have your registrar reset its domain name system (DNS) to point to Cloudflare, which can then provide you with the JavaScript Workers API at the point where your end-users are directed through Cloudflare by proxy to your origin server.

When you sign up for Cloudflare, you’ll log in and add DNS “A” (address) records pointing at your host provider’s origin server IP address, where your website code is published. You’ll also collect Cloudflare’s nameserver addresses, which you will need to enter into the nameserver fields at your domain name registrar.

JavaScript Cloudflare Workers. Sign up for Cloudflare’s Workers and you now have access to introduce complex JavaScript for all manner of things. Configure which pages you want to begin managing with Workers by matching wildcard criteria to request URL structures, and you are then empowered with request and response objects to code against.

The following example would direct all conceivable URLs, including subdomains for the domain, through a Worker API at Cloudflare by proxy.
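Using Cloudflare’s wildcard syntax, a route of that shape (with example.com standing in for your domain) looks like this:

```
*example.com/*
```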

By clicking Workers and then the Add route button, you will be presented with a popup form for entering your URL route matching criteria. The wildcard is the only dynamic matching operator you get, but being able to match several routes through the interface should be sufficient to match whatever you want. Refine your strategies one route at a time, since you can add several routes directed at one of several scripts.

Cloudflare has you set up a workers.dev address for hosting your Workers script. You get an editing interface and a preview playground as an IDE directly in the Cloudflare front end, where you’ll be able to change HTTP verbs and test how your Worker script responds before going live.

Solutions run the gamut from customizing response headers and Server-Side Rendering (SSR) to changing SEO by modifying titles, metadata, and content, redirecting, or even serving content from other hosts. In this way you can perform SEO fixes and even full site migrations, including changes to architecture such as respelling “/pages” and “/products.”

A Worker script along these lines demonstrates a few of those possibilities. It parses the URL from the request object, assigns the pathname to a variable, and uses it in a JavaScript switch statement to serve robots.txt and sitemap.xml Response objects directly, without hitting the origin server at all. One designated request can be served a page from another domain instead, and everything else falls through to a default that passes the request along to the origin.

This approach should provide you with plenty of food for thought about solving problems, including Shopify issues. However, Cloudflare isn’t offering it as a self-service product for Shopify sites just yet, though it plans to after an initial testing period. The service has been available to non-Shopify sites for over a year.

“The issue was that previously our system did not allow a direct customer to proxy in front of another customer, e.g., if you went to and signed up as a zone, we used to block you from proxying [with Shopify] for various technical reasons,” Patrick Donahue, director of product management at Cloudflare, told Search Engine Land.

Cloudflare’s enterprise sales team is currently working directly with Shopify merchants who want to onboard because there are manual steps involved that require assistance from their solutions engineers, Donahue explained. Cloudflare plans to make this option more widely available later this year.

More solutions may be on the horizon. Now that Cloudflare has solved the problem of sorting out the proxy configuration settings to get it to work with Shopify (which itself uses a caching solution), hopefully other caching solution providers will offer similar services as Cloudflare is preparing to with Shopify stores.

Looking for more ways to optimize and market your Shopify store?

Check out these resources:


What Safari’s 7-day cap on script-writeable storage means for PWA developers /what-safaris-7-day-cap-on-script-writeable-storage-means-for-pwa-developers-332519 Fri, 10 Apr 2020 16:54:57 +0000 /?p=332519 Now marketers know how their days are numbered.

The post What Safari’s 7-day cap on script-writeable storage means for PWA developers appeared first on Search Engine Land.

SEO for Developers. Detlef's tips for search marketers and programmers.

Confusion about an announcement of upcoming changes to Apple Safari’s Intelligent Tracking Prevention (ITP) led to accusations of Apple intentionally trying to destroy Progressive Web Apps (PWAs) “just as they were taking off.” It turns out that that’s not the case. However, the changes still have serious ramifications for web developers and marketers.

Developers face numerous challenges as browser support varies for features they might want to use on modern websites. Dealing with so much variance has always been daunting, and increases in complexity further affect deployment across a wide spectrum of services. If PWA support in Safari were restricted to a seven-day period, it would seriously impede progress in an exciting area where significant effort is being expended.

After five years of development, JavaScript-based PWAs give developers opportunities to extend website content to load offline, and for online content to refresh locally stored documents. Unfortunately, some have abused the extension of storage from cookies to “localStorage” and application cache stores to track personally identifying variables.

It would be a shame if that abuse led to only seven days of life for all storage. The indexedDB API and localStorage certainly are affected by this policy change, and developers need to take that into consideration. Apple has, however, clarified its position specifically with respect to web app Service Worker registrations and cache.

Safari’s script-writable storage

The storage available via cookies is very restricted, and removing cookies after seven days for reasons of privacy and security, as is ITP policy, is justifiable. Extending that policy to also remove “script-writable storage” is a logical next step, except that listing the example of “Service Worker registrations and cache” set off alarm bells for developers of PWAs.

The Safari policy regarding cookies is not a strict seven-day time limit. It involves a counter for up to seven unused days. That means every time a user opens Safari and visits your website, your seven-day counter for cookies and script-writable storage is reset to another seven days. Empty days don’t count against you when the user doesn’t use Safari.

It’s when they open Safari and browse without visiting your website on a particular day that days are added to your tally. You have seven such days until your cookies and all “script writable storage” is removed. It’s the user inactivity with your site that counts against you. Users will need to revisit your site in order for you to be able to write storage and start with a new counter.
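The counter logic described above can be sketched as a small simulation (purely illustrative; this is a model of the stated policy, not Apple’s code):

```javascript
// A model of the policy as described: for each day, record whether the user
// opened Safari and whether they visited your site. Returns true if your
// script-writable storage survives the sequence of days.
function storageSurvives(days) {
  let unused = 0;
  for (const day of days) {
    if (!day.openedSafari) continue;       // empty days don't count
    if (day.visitedYourSite) unused = 0;   // a visit resets the counter
    else unused += 1;                      // Safari used, your site skipped
    if (unused >= 7) return false;         // storage is purged
  }
  return true;
}
```

Seven consecutive Safari-browsing days without a visit purge your storage, while a single visit anywhere in the sequence starts the count over.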

You get infinite days with PWAs.

That’s not good enough for PWAs. Apple recognizes that.

By virtue of the way PWAs work, once your app is added to the home screen it will never run up a seven-day tally. That’s because Safari itself is not loading your app (even though the launcher invokes Safari Webkit’s WebView object).

The launcher has its own counter that is entirely separate from Safari’s counter, and each app shell operates from within its own separate process. Your app’s counter can only reset itself, which it does on each use. Since it resets every time the app launches, and never ticks up while the user is, say, in a different app, you effectively get unlimited storage unless and until the user removes your app.

It is notable that the WebKit team addressed the confusion with the note: “If your web application does experience website data deletion, please let us know since we would consider it a serious bug. It is not the intention of Intelligent Tracking Prevention to delete website data for first parties in web applications.”

Why should we care?

PWA development and use is picking up steam. PWAs began as a Google-initiated project, so Safari support is important to their success. Google engineers were among those seriously concerned about the new Safari policy change. The phrase “script-writable storage,” in the context of a seven-day unused lifespan, was initially thought to threaten that success.

It may not be the intention of ITP to delete data in first-party relationships, which includes PWA home screen apps. It is, however, Apple’s intention to further tighten default privacy in Safari so that only a robust first-party relationship endures, clearing out all unused data after a seven-day counter. At least now marketers know how their days are numbered in Safari.

The post What Safari’s 7-day cap on script-writeable storage means for PWA developers appeared first on Search Engine Land.

Coding for SEO: Using JavaScript to track COVID-19 /coding-for-seo-using-javascript-to-track-covid-19-331209 Wed, 01 Apr 2020 12:00:00 +0000 /?p=331209 An explanation of asynchronous JavaScript using Fetch API for getting JSON data to the screen.

The post Coding for SEO: Using JavaScript to track COVID-19 appeared first on Search Engine Land.

Alongside tanking search rankings, you may encounter a Google Search Console warning about First Input Delay. After profiling page performance using DevTools (the Performance tab’s CPU chart), you might find JavaScript consuming time on the CPU’s main thread. That JavaScript probably contains a widely supported, but older-style, XMLHttpRequest (XHR) for a network resource.

Modern JavaScript doesn’t have to lock up the CPU main thread the way synchronous XHR can. New patterns also make it easy to request and manage JSON data. Let’s see how it can be done with a simple script that fetches COVID-19 statistics to display in your browser.
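The original listing isn’t reproduced here, so the following is a minimal sketch of such a script. The endpoint URL, the stats element id, and the JSON field names are assumptions for illustration; wrap it in a basic HTML page containing a `<div id="stats"></div>` to follow along:

```javascript
// Assumed endpoint (The COVID Tracking Project); swap in any JSON API.
const API_URL = 'https://api.covidtracking.com/v1/us/current.json';

// Pure helper: turn a stats object into display text.
const formatStats = (data) =>
  `Positive: ${data.positive} | Deaths: ${data.death}`;

const getCovidStats = async () => {
  try {
    const response = await fetch(API_URL); // fetch returns a Promise
    const data = await response.json();    // parse the JSON body
    document.getElementById('stats').textContent = formatStats(data);
  } catch (err) {
    console.error('Fetch failed:', err);
  } finally {
    console.log('Request finished.');
  }
};

// Only run automatically in a browser context.
if (typeof document !== 'undefined') {
  getCovidStats();
}
```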

Copy the source code above and paste it into a text editor. Save it as a local file on your desktop (or somewhere you’ll remember). Make sure to give it the .html extension so that it opens easily using your web browser. Double-click the new file, or navigate to it using your browser’s [File]→[Open File] menu item.

Provided you’re connected to the Internet, you should see statistics appear on the page. Our script fetches JSON data from a source managed by several journalists and unpacks it to update your page DOM. Refresh the page or visit it periodically to track the latest statistics.

Assigning functions as values

When considering this code, skip over the opening HTML to look at what’s happening inside the script tag. The first line of JavaScript is a statement that assigns a complex value — a function — to the name “getCovidStats.”

const getCovidStats = async () => {

The assignment value is everything between the first equal sign after “getCovidStats” and the semicolon on line 28: async () => { ... };

The semicolon here terminates the value assignment.

Since the code declares an async function, when we reference getCovidStats we need to do so in the form for calling functions for it to work. We need to use parentheses, as you see them on line 29: getCovidStats();

Notice how we named getCovidStats() after what the function does? It’s a good idea to name functions after their primary action — labeling what our functions do — as a general naming convention. This makes it easier for future programmers to follow our code. It also makes it easier for you to follow your own code if you return to it months or years down the road.

A note about constants

The const keyword is shorthand for ‘constant,’ which is a general programming notion for a value that should not change. In this case, getCovidStats is a constant.

JavaScript constants cannot be reassigned or redeclared without errors. JavaScript provides those error messages as guard rails to help us avoid buggy code, and those guard rails force us to use our constants properly.

When you write constant values, you “hard-code” them to their variable names as if chiseling them in stone. We often write functions as constant variables because we don’t want our functions to behave unpredictably.
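As a quick illustration of those guard rails (our own snippet, separate from the article’s listing):

```javascript
const answer = 42;

let errorName = null;
try {
  // Reassigning a const throws a TypeError at runtime:
  // "Assignment to constant variable."
  answer = 43;
} catch (err) {
  errorName = err.name;
}
console.log(errorName); // "TypeError"
```

The original binding survives untouched: answer is still 42 after the failed reassignment.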

The async function

Don’t sweat the => arrow: the body of our async function is bounded by a set of curly braces: {}, opening on line 9 and closing on line 28. Notice the nested try {}, catch {}, and finally {} blocks, which are also bounded by curly braces. The code inside the curly braces is what runs when the function is called.

They run here sequentially: try first, catch only if an error gets thrown, and finally in either case.


Our async function introduces a special new keyword used in the try block: await. Using await implies a JavaScript ‘Promise’ will eventually settle at some future point. Using await won’t block processing of other tasks; other code can run while, at this point, we ‘await’ the resolution of the Promise.
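A tiny illustration of that non-blocking behavior (a sketch of our own, separate from the article’s listing):

```javascript
// delay() wraps setTimeout in a Promise so it can be awaited.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const run = async () => {
  await delay(10); // pauses this function only; other code keeps running
  return 'done';
};

// Calling an async function returns a Promise immediately.
const pending = run();
pending.then((result) => console.log(result)); // logs "done" shortly after
console.log('This line runs before "done" is logged.');
```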

Fetch API

We’re using fetch(), which returns a Promise object for the URL we designate as a required argument. Network methods such as fetch() are a natural use case for Promises because network connectivity is totally unpredictable. We don’t want our program to stop and wait for an immediate response. We want to move on, counting on the fact the Promise will settle down the line.

A note about browser support

The Fetch API and async functions with await are modern JavaScript patterns that aren’t always supported by old browsers, particularly Internet Explorer. Heed the warning about writing code for your target market.

If you need to support old browsers, then look into a deployment strategy that offers advanced features with polyfills or can gracefully degrade to your lowest-common-denominator browser.
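One way to sketch that strategy is simple feature detection with an XHR fallback (the getJSON name and shape here are our own, hypothetical):

```javascript
// Hypothetical helper: prefer fetch() when present, fall back to the
// legacy asynchronous XMLHttpRequest path for older browsers.
function getJSON(url, onSuccess, onError) {
  if (typeof fetch === 'function') {
    fetch(url)
      .then((response) => response.json())
      .then(onSuccess)
      .catch(onError);
  } else {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true); // true = asynchronous
    xhr.onload = function () {
      try {
        onSuccess(JSON.parse(xhr.responseText));
      } catch (err) {
        onError(err);
      }
    };
    xhr.onerror = onError;
    xhr.send();
  }
}
```

A production build would more likely ship a fetch polyfill instead, so all call sites can use one modern code path.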

Our COVID-19 data source

The source we use is kept up by several journalists working for major news organizations. It offers us JSON responses to simple GET requests. That is ideal for our purposes.

There are scattered data sources supporting international sites that produce amazing visualizations of the outbreak. Looking at what these use for data and trying to find an open API can lead to PDFs (WHO situation reports) or CSVs (Johns Hopkins), but for simplicity here we want JSON.


Coding for SEO 101: Understanding source code, compressed code and compiled code /coding-for-seo-101-understanding-source-code-compressed-code-and-compiled-code-331123 Tue, 31 Mar 2020 14:30:00 +0000 /?p=331123 Why can't computers read human language? Why do some source files look like crazy character noise? Are computer programmers magicians?

The post Coding for SEO 101: Understanding source code, compressed code and compiled code appeared first on Search Engine Land.

There are loads of coding-for-beginners resources out there, but often they don’t actually start at the very beginning. Here we’re going to look at common roadblocks encountered by beginners trying to learn to code.

You may know that source code is almost always just text files written using a computer language ‘syntax,’ which amounts to a set of instructions for the computer.

The common language that both humans and computers understand is mathematics. If you don’t initially think of math as a language, then remember that Morse code transmits human language using a syntax that could easily be described in terms of mathematics.

Computers understand mathematical systems.

Why do some source files look like crazy character noise? Good programmers write source code that looks logically organized. It just gets transformed through processing. If you open a file that you can’t immediately read, you may be looking at compressed data, binary code, or source code that has been reduced or ‘minified’ by removing unnecessary white space.

Minified Source Code

This last case is probably what you see most often when you use the ‘View Source’ feature of your web browser. Think about this article and its text. Think about how it would look if we removed all the spaces between all the words. You could probably read it, but there would be troublesome spots and it would take much longer. Spaces are pretty necessary. A minifying procedure wouldn’t remove necessary space.

Last paragraph with no spaces

What  if  the  style  guide for this sentence  requires  double-space? Two spaces between words in article writing are not an absolute necessity but they make it easier for human readers. In these cases, a minifying process for efficient transmission across great distances could remove one redundant space in order to reduce the total file size.

Programmers Space Things Out

Double-spaced text is easier to read and computer programmers use a lot of extra white space for precisely that reason. Computer source code is harder to read than plain text, and therefore we use far more whitespace than even a double-spaced article would. Whitespace is how programmers structure Python code, for example.

Sometimes we use 2, 4, or 8 spaces in a row to simulate tab characters, and sometimes we use tab characters themselves. We also use carriage-return characters (a notion from our old typewriter days): the computer simulates carriage returns, which lets us use the ‘return’ character (or newline) as whitespace to organize our code and make it easier to read.

How we organize our code with white space is usually dictated by some sort of personal, traditional, or company-required logic so that humans can read our instructions before they get compressed or get translated into machine code by a compiler.

These alternate forms of text are much harder, or even impossible, to read. When text is minified, you can usually figure out what simple code is doing, even though it’s more difficult to read when extra whitespace has been removed. When you’re looking at a text file that has been compressed, however, it is completely obfuscated.
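For instance, compare a readable function to its minified form (a hand-made example, not output from any particular minifier):

```javascript
// Readable source, spaced out for humans:
function addLineTotal(quantity, unitPrice) {
  const total = quantity * unitPrice;
  return total;
}

// The same function after minification might read:
// function addLineTotal(n,t){const u=n*t;return u}

console.log(addLineTotal(3, 4)); // 12
```

Both forms behave identically; the minified one just sheds the whitespace (and shortens names) that humans rely on.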

File Compression

Compression nearly suffices as a sort of crude (not secure) cryptography. Compression algorithms use mathematical formulas along with a table (or crosswalk/dictionary) to substitute for characters and their positions throughout an original text.

Compressed (Zip) file

When you decompress a file, the computer uses that table in combination with the generated formulas in reverse to restore an original text.

Viewing a Zip-compressed file (as uncompressed by Vim)

Compiled Source Code

Ultimately, when we’re writing computer programs, we’re writing programs that need to be processed by a CPU. When we write (client-side) JavaScript, our instructions need to get ‘interpreted’ by the browser and translated into machine code for the user’s CPU to process. That’s why JavaScript can crash your browser (and why Google measures the CPU load of the scripts you write).

Compiled source code starts as text files. Text is then transformed into machine code instructions by a corresponding compiler for performance boosts over code that is otherwise interpreted at run time. When you open machine code binaries, you’re going to have a hard time understanding any of it. That’s because it’s streamlined code for computer processing and is not in a form meant for human reading.

Binary file (machine code for the cat program)

In summary, there are three ways you might see computer code noise that looks totally arcane:

  1. Minified source code.
  2. Compressed files (source code or other media).
  3. Compiled machine code (binaries or possibly assembly language).

Of all these, only assembly language is anything a computer programmer might write. If you’re writing code in assembly language, then you’re probably a magician. At some point in your journey you may end up writing something like assembly or Perl that, to the ordinary eye, still looks like a bunch of crazy noise.
