Posted by lkolowich
You just ran what you thought was a really promising conversion test. In an effort to raise the number of visitors that convert into demo requests on your product pages, you test an attractive new redesign on one of your pages using a good ol’ A/B test. Half of the people who visit that page see the original product page design, and half see the new, attractive design.
You run the test for an entire month, and as you expected, conversions are up — from 2% to 10%. Boy, do you feel great! You take these results to your boss and advise that, based on your findings, all product pages should be moved over to your redesign. She gives you the go-ahead.
But when you roll out the new design, you notice the number of demo requests goes down. You wonder if it’s seasonality, so you wait a few more months. That’s when you start to notice MRR is decreasing, too. What gives?
Turns out, you didn’t test that page long enough for results to be statistically significant. Because that product page only saw 50 views per day, you would’ve needed to wait until over 150,000 people viewed the page before you could achieve a 95% confidence level — which would take over eight years to accomplish. Because you failed to calculate those numbers correctly, your company is losing business.
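To make the sample-size problem concrete, here is a minimal sketch using the standard two-proportion sample-size formula. The baseline rate, detectable lift, significance level, and power below are illustrative assumptions, not figures from the original test:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed *per variant* to detect a change in
    conversion rate from p1 to p2 (two-sided two-proportion test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative numbers: detecting a small lift from 2.0% to 2.2%
n_per_variant = required_sample_size(0.02, 0.022)  # roughly 80,000 per variant
days_needed = ceil(2 * n_per_variant / 50)  # at 50 total visits/day
```

With these assumed inputs, the test needs on the order of 160,000 total visitors, which at 50 visits a day works out to roughly 3,200 days of traffic, the same ballpark as the "over eight years" figure above. Note that large lifts need far smaller samples; it's the small, realistic lifts that demand patience.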
Miscalculating sample size is just one of many mistakes marketers make in the CRO space. It's easy for marketers to trick themselves into thinking they're improving their marketing when, in fact, they're leading their business down a dangerous path by basing tests on incomplete research, small sample sizes, and so on.
But remember: The primary goal of CRO is to find the truth. Basing a critical decision on faulty assumptions and tests lacking statistical significance won’t get you there.
To help save you time and overcome that steep learning curve, here are some of the most common mistakes marketers make with conversion rate optimization. As you test and tweak and fine-tune your marketing, keep these mistakes in mind, and keep learning.
Equating A/B testing with CRO is like assuming every rectangle is a square. While A/B testing is a type of CRO, it's just one tool of many. A/B testing only covers testing a single variable against another to see which performs better, while CRO encompasses all manner of testing methodologies, all with the goal of leading your website visitors to take a desired action.
If you think you're "doing CRO" just by A/B testing everything, you're limiting yourself. There are plenty of occasions where A/B testing isn't helpful at all; for example, when your sample size isn't large enough to collect the proper amount of data. Does the webpage you want to test get only a few hundred visits per month? Then it could take months to round up enough traffic to achieve statistical significance.
If you A/B test a page with low traffic and then decide six weeks down the line that you want to stop the test, then that’s your prerogative — but your test results won’t be based on anything scientific.
A/B testing is a great place to start with your CRO education, but it’s important to educate yourself on many different testing methodologies so you aren’t restricting yourself. For example, if you want to see a major lift in conversions on a webpage in only a few weeks, try making multiple, radical changes instead of testing one variable at a time. Take Weather.com, for example: They changed many different variables on one of their landing pages all at once, including the page design, headline, navigation, and more. The result? A whopping 225% increase in conversions.
When you read that line about the 225% lift in conversions on Weather.com, did you wonder what I meant by "conversions"?
If you did, then you’re thinking like a CRO.
Conversion rates can measure any number of things: purchases, leads, prospects, subscribers, users. It all depends on the goal of the page. Just saying "we saw a huge increase in conversions" doesn't mean much if you don't specify what that conversion actually is. In the case of Weather.com, I was referring specifically to trial subscriptions: Weather.com saw a 225% increase in trial subscriptions on that page. Now the meaning of that conversion rate increase is much clearer.
But even stating the metric isn’t telling the whole story. When exactly was that test run? Different days of the week and of the month can yield very different conversion rates.
For that reason, even if your test reaches 98% significance after three days, you still need to run it for the rest of the full week, because conversion rates can differ substantially from day to day. The same goes for months: don't run a test during the holiday-heavy month of December and expect the results to match what you'd see in March. Seasonality will affect your conversion rate.
Other things that can have a major impact on conversion rate? Device type is one. Visitors might be willing to fill out that longer form on desktop, but are mobile visitors converting at the same rate? Better investigate. Channel is another: Be wary of reporting “average” conversion rates. If some channels have much higher conversion rates than others, you should consider treating the channels differently.
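To see why a blended "average" conversion rate can mislead, here is a small sketch with made-up per-channel numbers (the channels and counts are purely illustrative):

```python
# Hypothetical traffic and conversion counts per channel
channels = {
    "organic": {"visits": 8000, "conversions": 400},  # 5.0%
    "email":   {"visits": 1000, "conversions": 90},   # 9.0%
    "paid":    {"visits": 6000, "conversions": 60},   # 1.0%
}

# Per-channel conversion rates
rates = {name: d["conversions"] / d["visits"] for name, d in channels.items()}

# The blended "average" rate across all traffic
total_visits = sum(d["visits"] for d in channels.values())
total_conversions = sum(d["conversions"] for d in channels.values())
blended_rate = total_conversions / total_visits  # 550 / 15000, about 3.7%
```

In this toy example the blended 3.7% hides a nine-fold spread between email (9%) and paid (1%), which is exactly why channels with very different rates deserve separate treatment.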
Finally, remember that conversion rate isn’t the most important metric for your business. It’s important that your conversions are leading to revenue for the company. If you made your product free, I’ll bet your conversion rates would skyrocket — but you wouldn’t be making any money, would you? Conversion rate doesn’t always tell you whether your business is doing better than it was. Be careful that you aren’t thinking of conversions in a vacuum so you don’t steer off-course.
One of the biggest mistakes I made when I first started learning CRO was thinking I could rely on what I remembered from my college statistics courses to run conversion tests. Just because you’re running experiments does not make you a scientist.
Statistics is the backbone of CRO, and if you don’t understand it inside and out, then you won’t be able to run proper tests and could seriously derail your marketing efforts.
What if you stop your test too early because you didn’t wait to achieve 98% statistical significance? After all, isn’t 90% good enough?
No, and here’s why: Think of statistical significance like placing a bet. Are you really willing to bet on 90% odds on your test results? Running a test to 90% significance and then declaring a winner is like saying, “I’m 90% sure this is the right design and I’m willing to bet everything on it.” It’s just not good enough.
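As a rough illustration of how a significance check works, here is a minimal pooled two-proportion z-test. The visitor and conversion counts are made-up numbers, and the normal approximation is a simplification of what a full testing tool would do:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: 2.0% vs. 2.3% conversion on 5,000 visitors per variant
p = ab_test_p_value(100, 5000, 115, 5000)
# p comes out around 0.30 -- nowhere near the 0.05 (95%) threshold,
# let alone a stricter 0.02 (98%) bar
```

A lift that looks meaningful in a dashboard can still be well within the range of random noise; running the numbers is what tells you whether you're looking at signal.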
If you're in need of a statistics refresher, don't panic. It'll take discipline and practice, but it'll make you a much better marketer and tighten up your testing methodology considerably. Start by reading this Moz post by Craig Bradford, which covers sample size, statistical significance, confidence intervals, and percentage change.
Just because something is doing well doesn't mean you should leave it be. Often, the assets that are already performing well are the ones with the highest potential to perform even better when optimized. Some of our biggest CRO wins here at HubSpot have come from exactly those assets.
I’ll give you two examples.
The first comes from a project run by Pam Vaughan on HubSpot’s web strategy team, called “historical optimization.” The project involved updating and republishing old blog posts to generate more traffic and leads.
But this didn’t mean updating just any old blog posts; it meant updating the blog posts that were already the most influential in generating traffic and leads. In her attribution analysis, Pam made two surprising discoveries:
Why? Because these were the blog posts that had slowly built up search authority and were ranking on search engines like Google. They were generating a ton of organic traffic month after month after month.
The goal of the project, then, was to figure out: a) how to get more leads from our high-traffic but low-converting blog posts; and b) how to get more traffic to our high-converting posts. By optimizing these already high-performing posts for traffic and conversions, we more than doubled the number of monthly leads generated by the old posts we optimized.
Another example? In the last few weeks, Nick Barrasso from our marketing acquisition team did a leads audit of our blog. He discovered that some of our best-performing blog posts for traffic were actually leading readers to some of our worst-performing offers.
To give a lead conversion lift to 50 of these high-traffic, low-converting posts, Nick conducted a test in which he replaced each post’s primary call-to-action with a call-to-action leading visitors to an offer that was most tightly aligned with the post’s topic and had the highest submission rate. After one week, these posts generated 100% more leads than average.
The bottom line is this: Don’t focus solely on optimizing marketing assets that need the most work. Many times, you’ll find that the lowest-hanging fruit are pages that are already performing well for traffic and/or leads and, when optimized even further, can result in much bigger lifts.
When it comes to CRO, process is everything. Remove your ego and assumptions from the equation, stop relying on individual tactics to optimize your marketing, and instead take a systematic approach to CRO.
Your CRO process should always start with research. In fact, conducting research should be the step you spend the most time on. Why? Because the research and analysis you do in this step will lead you to the problems — and it’s only when you know where the problems lie that you can come up with a hypothesis for overcoming them.
Remember that test I just talked about that doubled leads for 50 top HubSpot blog posts in a week? Nick didn’t just wake up one day and realize our high-traffic blog posts might be leading to low-performing offers. He discovered this only by doing hours and hours of research into our lead gen strategy from the blog.
Paddy Moogan wrote a great post on Moz on where to look for data in the research stage. What does your sales process look like, for example? Have you ever reviewed the full funnel? “Try to find where the most common drop-off points are and take a deeper dive into why,” he suggests.
Here’s an (oversimplified) overview of what a CRO process should look like:
As you go through these steps, be sure you’re recording your hypothesis, test methodology, success criteria, and analysis in a replicable way. My team at HubSpot uses the template below, which was inspired by content from Brian Balfour’s online Reforge Growth programs. We’ve created an editable version in Google Sheets here that you can copy and customize yourself.
Don’t forget the last step in the process: Conduct a follow-up experiment. What can you refine for your next test? How can you make improvements?
One of the most important pieces of advice I’ve ever gotten around CRO is this: “A test doesn’t ‘fail’ unless something breaks. You either get the result you want, or you learned something.”
It came from Sam Woods, a growth marketer, CRO, and copywriter at HubSpot, after I used the word “fail” a few too many times after months of unsuccessful tests on a single landing page.
What he taught me was a major part of the CRO mindset: Don’t give up after the first test. (Or the second, or the third.) Instead, approach every test systematically and objectively, putting aside your previous assumptions and any hope that the results would swing one way or the other.
As Peep Laja said, “Genuine CROs are always willing to change their minds.” Learn from tests that didn’t go the way you expected, use them to tweak your hypothesis, and then iterate, iterate, iterate.
I hope this list has inspired you to double down on your CRO skills and take a more systematic approach to your experiments. Mastering conversion rate optimization comes with a steep learning curve — and there’s really no cutting corners. You can save a whole lot of time (and money) by avoiding the mistakes I outlined above.
Have you ever made any of these CRO mistakes? Do you have any CRO mistakes to add to the list? Tell us about your experiences and ideas in the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
Posted by alexis-sanders
When creating a modern web page, there are three major components: HTML (the content), CSS (the style), and JavaScript (the behavior and interactivity).
A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience, since users don't have to refresh the entire page.
One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.
The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.
The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.
Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.
Are bots able to find URLs and understand your site’s architecture? There are two important elements here:
The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.
!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.
Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you decide that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections of your site. Think: Jim Collins's "bullets, then cannonballs" philosophy from his book Great by Choice:
“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”
Consider testing and reviewing through the following:
After you've tested all this, what if something's not working and search engines and bots are struggling to index and obtain your content? Perhaps you're concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you're leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.
HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).
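To sketch what serving a snapshot might look like, here is a toy version of the routing logic a server could use to hand prerendered HTML to known crawlers. The bot tokens and file names are illustrative assumptions, and the snapshot served to bots must contain the same content a user would see, or it risks being treated as cloaking:

```python
# Toy routing logic for serving prerendered HTML snapshots to crawlers.
# Bot tokens and template names below are illustrative assumptions.

BOT_TOKENS = ("googlebot", "bingbot", "baiduspider",
              "twitterbot", "facebookexternalhit")

def is_known_bot(user_agent: str) -> bool:
    """Crude user-agent sniffing against a list of known crawler tokens."""
    ua = user_agent.lower()
    return any(token in ua for token in BOT_TOKENS)

def template_for(user_agent: str) -> str:
    # Bots get the fully rendered static snapshot; users get the live app.
    return "snapshot.html" if is_known_bot(user_agent) else "app.html"
```

A real implementation would live in your web server or framework middleware rather than plain functions, but the decision point (inspect the user agent, choose the prerendered or live response) is the same.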
Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to "avoid" in late 2016. HTML snapshots are a contentious topic with Google. However, they're important to understand, because in certain situations they're necessary.
When considering HTML snapshots, keep in mind that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience that is more true to the user's experience.
A second consideration relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it's considered a cloaking risk. Straight from the source:
“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
– Google Developer AJAX Crawling FAQs
Despite the considerations, HTML snapshots have powerful advantages:
When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.
The concept of Google's critical rendering path is to load what the user needs as soon as possible; in other words, "get everything above-the-fold in front of the user, ASAP."
Critical Rendering Path – Optimized Rendering Loads Progressively ASAP:
!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
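As a sketch of this ordering (all file names here are placeholders, and whether a given script can be deferred depends on whether it builds above-the-fold content), a page tuned for the critical rendering path might look like:

```html
<head>
  <!-- Critical above-the-fold CSS inlined so no extra request blocks first paint -->
  <style>/* critical above-the-fold rules here */</style>

  <!-- Non-critical stylesheet loaded without blocking rendering
       (placeholder file name) -->
  <link rel="stylesheet" href="below-fold.css" media="print" onload="this.media='all'">
</head>
<body>
  <!-- ...above-the-fold content; any script that builds this content
       should NOT be deferred... -->

  <!-- The library loads before the script that depends on it; `defer`
       preserves this order while keeping both from blocking HTML parsing -->
  <script defer src="library.js"></script>
  <script defer src="uses-library.js"></script>
</body>
```

The key property used here is that `defer` scripts execute in document order after parsing finishes, which satisfies both constraints from the note above: dependencies load before dependents, and parsing is never interrupted.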
Read more: Google Developer’s Speed Documentation
Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.
Editor’s note: As part of our series of interviews with entrepreneurs across Asia Pacific who use the internet to grow their business, we caught up with Lucky D. Aria, the founder of Matoa, to find out how he went from working in a cookie factory to starting his own watchmaking enterprise. Matoa now exports their watches made from reclaimed wood to Europe, Japan, Malaysia, Singapore and the U.S.
Tell us about your journey to becoming an entrepreneur.
Seven years ago, I was a high school graduate working at a small cookie company in Bandung. At the time I had a monthly salary of $75. I would get an extra commission during Hari Raya (Ramadan) and that was the only money I could save. It was tough to make ends meet, so I knew that something had to change. Starting my own company was a risky decision, and everyone advised against leaving a stable job. But I knew I had to take a risk and make a change.
How did you manage to successfully launch Matoa?
I reverse-engineered what others often do: I didn’t want to sell what I made, instead I wanted to make what people would buy. After a lot of research, I saw there was a niche for specialty watches. I started learning about consumer preferences and what they need and want before designing the end product. I had to borrow capital from family and friends because my family couldn’t secure a bank loan since we had nothing to offer as collateral. But that didn’t deter me. I was so happy when I sold my very first watch at a local exhibition in 2011, one year after leaving the cookie factory. And we grew from there.
How have the internet and Google's tools helped transform your business?
Last year, exports of Matoa watches made up a third of our sales, so about 3,500 units in total. The internet has changed our lives and how we do business. Now, I can sell my products in every corner of the world using the internet. I have many distributors outside of Indonesia, whom I have not had the chance to meet face-to-face, but we can develop our partnership because we're online. I truly believe every company can use the internet to grow their business.
Google AdWords increased my local sales in Indonesia by 160% year-on-year from 2015 to 2016. Prior to AdWords, I faced difficulty in expanding my business—even in Indonesia. Bringing our products to consumers would have required us to set up physical storefronts in every city in Indonesia and this would have been extremely expensive.
What inspires you to continue to grow as an entrepreneur and business owner?
My family’s economic conditions have improved a lot. I own my own house now. I have grown a lot personally. Now I focus on spreading this welfare to my 40 employees, many of whom rely on this company for their livelihood. I can’t afford to disappoint them, and I want to help them grow so one day they can start their own business doing something they are passionate about.
What’s your advice to other entrepreneurs?
If you want to sustain your business, make sure you don’t create a product and push it to the market without first asking “why?”. Ask yourself, “why would consumers want to buy our products?” If you don’t have a good answer to that, you’re not likely to succeed.
What’s next for your business in 2017 and beyond?
In 2017, we launched accessories for smart watches to complement the traditional wooden products we provide, which reflect Indonesia’s cultural heritage. We aim to compete with global brands.
Beyond that, my big vision for Matoa is to continue to grow and develop the business so we can provide more job opportunities to people locally. So far, Matoa has also empowered the livelihoods of 35 families in Ciwidey, a small village in West Java. They help process raw wood materials and handcraft our wooden watches. I’m glad they have gained new skills and can generate a stable income by working with Matoa.
More than 400 million people in India use the internet, and more are coming online every day. But the vast majority of India’s online content is in English, which only 20 percent of the country’s population speaks—meaning most Indians have a hard time finding content and services in their language.
Building for everyone means first and foremost making things work in the languages people speak. That’s why we’ve now brought our new neural machine translation technology to translations between English and nine widely used Indian languages—Hindi, Bengali, Marathi, Gujarati, Punjabi, Tamil, Telugu, Malayalam and Kannada.
Neural machine translation translates full sentences at a time, instead of pieces of a sentence, using this broader context to help it figure out the most relevant translation. The result is higher-quality, more human-sounding translations.
Just like it’s easier to learn a language when you already know a related language, our neural technology speaks each language better when it learns several at a time. For example, we have a whole lot more sample data for Hindi than its relatives Marathi and Bengali, but when we train them all together, the translations for all improve more than if we’d trained each individually.
These improvements to Google Translate in India join several other updates we announced at an event in New Delhi today, including neural machine translation in Chrome and bringing the Rajpal & Sons Hindi dictionary online so it's easier for Hindi speakers to find word meanings right in search results. All these improvements help make the web more useful for hundreds of millions of Indians, and bring them closer to benefiting from the full value of the internet.
Author: Hally Pinaud
Creating and maintaining buyer personas has been an important task in every role I’ve held as a marketer. Why is that? Personas–when built and used correctly–are a very effective way to channel real empathy for your buyers. That empathy makes it easier to drive winning strategies across the customer lifecycle through campaigns, content, nurture paths, account plans, and sales collateral.
They also happen to be one of the things I speak with our customers about most frequently–hence this blog post! So, whether you’re looking to create your first persona or double-check your approach, here are four things that can limit the impact of your personas:
Have you spoken with your personas lately? No, I’m not talking about some kind of weird, talking-to-a-PDF kind of activity. I mean, have you interviewed the people who would correspond to each persona’s defining factors, specifically to validate that persona? From what I’ve observed, this is one of the most common mistakes when it comes to creating personas.
These "lab-grown" personas stem from assuming you know your buyers well enough without external validation. Maybe your organization is pretty open and you have good proximity to prospects and customers. Or maybe you've lived in the persona's shoes yourself (this is a big one; it's something I struggle with here at Marketo). Lived experiences are valuable, but "me, myself, and I" is a limited and biased sample, and even your customer and prospect pools are inherently limited.
Luckily, it’s easier to fix than you think: send out some emails and set up some 30-minute interviews. Start with a handful of people–a mix of customers, prospects, and total strangers who look like your persona–and ask them about the details your persona documents. Pro tip: It can be tough to find willing strangers to interview, but a combo of colleagues’ networks, LinkedIn InMails, and $50 Amazon gift cards will get you anywhere.
Hey there, persona hoarder. I see you. You made that great persona and you're using it to drive your messaging and marketing programs, aren't you? But have you walked your demand generation team through the persona they're creating nurture programs for? What about sales or customer success? Have you printed it out so they can tape it to the inside of their desks like a Leonardo DiCaprio poster circa 1997? (Always an option.)
Your customer-facing colleagues need to exercise those empathy muscles to do their jobs well. If you aren’t sharing your fresh, validated persona knowledge, they’re going to make it up as they go. So, train and retrain on buyer personas often. Ensure they’re easy to find among your internal content resources and welcome questions, contributions, and ideas from folks who deal with these people each and every day. Personas should make us all better at what we do.
A lot of marketers characterize their personas with photos or names. To be clear, those details can be a good thing. It helps humanize a generalized portrait of your buyer and makes it easier for folks on your team to use a persona as a reference point. For example, “Would Emily the Email Specialist want to read this blog post? What tone would she respond to?” The problem I have is when those details run amok.
Emily has a French Bulldog. She drives a Jeep Liberty. She only reads People Magazine when she gets her hair done.
Really? Do those details help your team make better decisions about how to reach Emily? Maybe, if you sell dog sweaters or hair products. Otherwise, elevate your persona details to focus on what will drive business outcomes, and catch yourself before you get carried away with nitty-gritty details that won't.
This is an easy one: update your personas! Revisit them every quarter or two, especially if they’re critical personas like a budget holder or key decision-maker. Yes, we’re busy as marketers, but if your personas haven’t been touched since they were researched during the last Winter Olympics, your hopelessly out-of-date Rip Van Persona might not be helpful anymore. In fact, it may be causing more harm than good–buyers’ challenges, goals, and trusted resources can evolve rapidly in the digital age.
4 Big Mistakes You Might Be Making with Your Marketing Personas was posted at Marketo Marketing Blog – Best Practices and Thought Leadership. | http://blog.marketo.com
Using neural machine translation, we've just updated Hebrew and Arabic on Google Translate. But what you can't see on the surface is that these translations also improved thanks to students across Israel. As English as a Foreign Language (EFL) students used the Google Translate Community platform to learn and practice their English, they actually improved translations for everyone in the process.
Adele Raemer is an Israeli English teacher, a trainer for English as a Foreign Language (EFL) and digital pedagogy at the Israel Ministry of Education; she’s also a Google Certified Innovator, a Google Educator Group leader, and blogger.
When Adele first used the Translate Community as a tool to teach English, she was impressed by how eager and motivated her students became. She wanted other students to share in the experience, so with the support of the Ministry of Education EFL superintendent and our education team, she turned this into a challenge for classrooms across Israel. The goal was to help students work on their vocabulary, develop critical thinking and translation skills, and enhance their engagement with English studies.
Last spring, 51 classes from across the country joined our Google Translate Community pilot competition. A month later, the class with the highest number of collective contributions joined us for a visit to our Google Israel office. The teachers used the challenge as a fun activity on top of their regular curriculum. As Mazi, an English teacher at “Hodayot” high school, said: “The experience of participating in the competition was very positive and enriched my teaching. Any time that a student finished a task early or had a bit of time at the end of the lesson, they could be productive by going into the site and translating!”
Inspired by the success of Adele's pilot program, the Translate Community team then built new tools that allowed group contributions and measured results more accurately. With new supporting lesson plans, more than 150 classes participated in a three-month competition for Hebrew-English and Arabic-English. Across these two competitions, 3,500 students translated and verified more than 4 million words and phrases.
We’ve incorporated this multi-lingual knowledge into the training for our cutting-edge neural technology, which we’ve just launched today for Hebrew and Arabic. That means every one of these contributions helped improve translations for millions of people doing translations to or from these related languages.
We were thrilled to see the great impact that these students had on Translate itself. It’s so cool to see how the next generation of students is working hand in hand with the next generation of machine translation technology!
With the AMP project being aggressively pushed by Google recently, the topic of site speed has come up for active discussion again. In case you want a quick background on those discussions, here is a timeline of Google trying to make the web faster and dealing with site speed over the years:
December: Google Page Speed report comes to Google Webmaster Tools under “Site Performance” section.
April: Google announces they have started using site speed as one of the ranking signals. At that time, fewer than 1% of search queries were affected by the site speed signal and the signal for site speed only applied for visitors searching in English on Google.com
March: Google announces a new online tool in Google Labs, Page Speed Online, which analyzes the performance of web pages and gives specific suggestions for making them faster. In May, a "shiny new" API is added to the tool.
May: Google announces a new tool for Google Analytics: the Site Speed report. What makes this Google Analytics feature so awesome is that page load time numbers are correlated against other metrics, like bounce rates and the percentage of exits from each page, allowing you to clearly see how slower pages affect user experience.
November: Site speed suggestions are added to Google Analytics. The new Speed Suggestions report shows the average page load time for top visited pages on your website and integrates with the PageSpeed Insights tool to surface suggestions for improving the pages for speed.
May: Google adds more mobile usability suggestions to their PageSpeed Insights tool.
Feb: Webmasters notice a new label within mobile search results flagging some pages as slow.
April: According to John Mueller, page speed negatively impacts sites that are slower rather than giving boosts to sites that are faster. The same month, Gary Illyes, webmaster trends analyst at Google, says webmasters shouldn't stress over website speed too much.
June: Google launches a new tool that helps test your website's speed and mobile-friendliness. Google says the average user leaves a site if it doesn't load on mobile within three seconds.
June: According to Gary Illyes, the speed of your mobile pages currently doesn’t impact your mobile rankings, but soon it may.
November: John Mueller recommends keeping your page load time below 2-3 seconds.
January: Google adds AMP ads. The project aims to make ads load fast again.
To understand how to make your site faster, use my collection of site speed resources.
Have I missed anything? Please share your additions in the comments!