Tag Archives: search engine optimization

Search engine optimization (SEO) strategies with Umbraco 7.1.0

FAQ and Glossary page SEO strategies with Umbraco, Angular, and dynamic site content.

SEO issues with Angular

In this article, I will discuss implementing search engine optimization (SEO) strategies with Umbraco 7.1.0. I recently learned from http://www.uwestfest.com/ that AngularJS introduces new SEO challenges. In Angular, template expressions are wrapped in double curly braces ({{…}}). Google does not execute JavaScript when indexing website content, so it sees the raw {{…}} placeholders without evaluating the expressions inside them. This means that Angular is fine for content that is not intended for indexing, but for content you want people to discover via search engines, you need a strategy. One technique is to perform server-side rendering by running the application in a headless browser (e.g. PhantomJS). For best performance, we pre-render the content into HTML files through a Node.js backend and then direct the search engine to the output file instead of our actual page. Yes, it is somewhat of a hack, but until we have something like Rendr that works with Angular, we may not have much choice.
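
As a rough sketch of that pre-rendering step, a PhantomJS script along these lines can load an Angular page, wait for the expressions to be evaluated, and write the resulting HTML to a static file (the URL, output path, and render delay below are placeholder assumptions, not part of any particular project):

```javascript
// snapshot.js -- run with: phantomjs snapshot.js
// Loads an Angular page in a headless browser, waits for the app to
// render, then writes the fully evaluated HTML to a snapshot file.
var page = require('webpage').create();
var fs = require('fs');

var url = 'http://localhost:3000/#!/faq';  // placeholder URL
var output = 'snapshots/faq.html';         // placeholder output path

page.open(url, function (status) {
    if (status !== 'success') {
        console.log('Failed to load ' + url);
        phantom.exit(1);
    } else {
        // Give Angular time to evaluate its {{...}} expressions. A fixed
        // delay is crude; polling for a rendering-complete flag is sturdier.
        window.setTimeout(function () {
            fs.write(output, page.content, 'w');
            phantom.exit();
        }, 2000);
    }
});
```

A Node.js build step can loop a script like this over every URL in the sitemap so the snapshots stay current.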

Sending Google snapshots of HTML that has been processed by your JavaScript

Once a sitemap.xml or robots.txt file is created, you can use grunt-html-snapshots to generate snapshots of the pages it lists.

“How do we grab the results and send them to Google whenever the search engine requests access, though?”

To do this, we must use special URL routing that follows Google’s AJAX crawling conventions: https://developers.google.com/webmasters/ajax-crawling/.
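
In short, a crawler that supports this convention requests the page with a ?_escaped_fragment_= query parameter instead of the #! fragment, and the server is expected to respond with the pre-rendered snapshot. Here is a minimal sketch of that routing, assuming Express 4 and a snapshots/ directory with one file per page (both assumptions for illustration):

```javascript
// Serve pre-rendered snapshots to crawlers that follow Google's
// AJAX crawling convention (?_escaped_fragment_=...).
var express = require('express');
var path = require('path');
var app = express();

app.use(function (req, res, next) {
    var fragment = req.query._escaped_fragment_;
    if (fragment === undefined) {
        return next(); // ordinary visitor: serve the Angular app as usual
    }
    // Map the fragment to a snapshot file, e.g. '/faq' -> snapshots/faq.html.
    var name = fragment.replace(/^\//, '').replace(/\//g, '-') || 'index';
    res.sendFile(path.join(__dirname, 'snapshots', name + '.html'));
});

app.use(express.static(path.join(__dirname, 'public')));
app.listen(3000);
```

Pages that use regular URLs instead of #! can opt in to the same convention by including <meta name="fragment" content="!"> in their head.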

Here is more information on how this is done: http://www.yearofmoo.com/2012/11/angularjs-and-seo.html.

Here is another good article on how to create and use the HTML snapshots:

https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot

How to make sure your dynamic content is indexed

This snapshotting technique can also be applied to dynamic content, such as content rendered from a database. For best results, it is important to design the infrastructure so that the content displayed depends on the URL requested. This strategy works best with RESTful content (in the sense of the REST architectural style) and fits naturally with web design patterns such as Model-View-ViewModel (MVVM) and Model-View-Controller (MVC).
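
A minimal AngularJS sketch of that idea uses ngRoute so that the URL alone determines which database record is displayed (the module name, template, and API endpoint are illustrative assumptions):

```javascript
// Each glossary term gets its own URL (/glossary/:term), so every piece
// of dynamic content is addressable -- and snapshot-able -- individually.
angular.module('glossaryApp', ['ngRoute'])
    .config(function ($routeProvider) {
        $routeProvider.when('/glossary/:term', {
            templateUrl: 'views/term.html',
            controller: 'TermCtrl'
        });
    })
    .controller('TermCtrl', function ($scope, $http, $routeParams) {
        // The route parameter drives the database lookup, so the same URL
        // always yields the same content -- exactly what a crawler needs.
        $http.get('/api/glossary/' + $routeParams.term)
            .success(function (data) { $scope.term = data; });
    });
```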

Future for Umbraco

Umbraco, particularly Umbraco 7.1+, is an amazing platform, and it is changing the rules of the game. In the future, I will be posting more information on how to sync static nodes in Umbraco with a SQL Server database. In the past, I have written about the techniques required to obtain very good SEO results. While these techniques are useful in theory, developers often look for concrete implementations to better understand abstract concepts.

Grounding SEO theory with concrete strategies

To this end, I offer two suggestions:

  1. FAQ pages
  2. Glossary pages

These two techniques are easy to implement but very hard to master. I will speak about both of these briefly.

FAQ Pages

FAQ pages are similar to glossary pages. These pages are designed to capture a cluster of keyword phrases that derive from a base keyword phrase.

Process of developing FAQ pages

Several years ago, a consultant from Webtrends suggested that I follow this pattern:

  1. Start with a set of keyword phrases that you want to target.
  2. Choose one or two keywords that you want to form the base of your targeting.
  3. Choose 30-50 keyword phrases that contain your base keyword.
  4. Construct questions from these keyword phrases. Ensure that the keywords are contained in the questions.
  5. Create a web page for each phrase. Place the keywords in these HTML tags (see the markup sketch after this list):
  • Meta keywords tag
  • Meta description
  • Title
  • H1 tag
  • Body text (paragraph tag)
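
For instance, a page built around the hypothetical phrase “how do you get rid of lice” might place it like this (the copy below is invented purely for illustration):

```html
<!-- Hypothetical FAQ page for the phrase "how do you get rid of lice" -->
<html>
<head>
    <title>How Do You Get Rid of Lice?</title>
    <meta name="keywords" content="how do you get rid of lice, lice removal" />
    <meta name="description" content="How do you get rid of lice? A short,
        practical answer covering combing, treatment, and prevention." />
</head>
<body>
    <h1>So, how do you get rid of lice?</h1>
    <p>Getting rid of lice starts with a thorough combing ...</p>
</body>
</html>
```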


How to prevent the site from appearing like spam to search engines

Ensure that you vary how you use your keywords; otherwise, your site content may look like spam to Google. For example, in your H1 tag, use the keywords in a full sentence. Be sure to answer the question to the best of your ability.

Proof of SEO concept

With this strategy, I was able to land quite a few of this website’s pages in the top 10 of Google’s search results:

http://faq.liceoff.com

If you check out this website, notice that the base keyword “lice” is located in the domain name and the keyword “faq” is in the subdomain. Hosting the FAQ pages on a “faq” subdomain (typically configured in DNS as a CNAME record) signals to search engines and visitors alike that these URLs point to FAQ pages.


Glossary pages

Glossary pages attract a very specific type of user, so it is important to consider how each person will be using your site. Glossary pages are best when you are trying to provide a resource that people will frequently refer back to. These pages tend to have high bounce rates, but if done correctly, they also tend to bring visitors back repeatedly. The goal here is not necessarily to obtain an instant conversion. Instead, your goal is to provide a valuable informational resource on a particular subject. As people land on your site, they quickly obtain the desired information and usually leave. However, with some strategy, you can still convert these visitors into people who explore your site in greater depth.

Strategy for converting visitors from glossary pages

To design glossary pages effectively, it is critical to also provide internal links to additional articles for interested readers. This allows casual readers to get the information they want and leave, while also offering additional resources for more engaged readers. It also allows you to track conversions as clicks on those internal links. The best way to create glossary pages is to create one glossary page for each keyword or keyword phrase that you are targeting. Here is one of Google’s examples of a glossary.
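
One way to count those click-throughs, assuming the site already has Universal Analytics (analytics.js) installed, is to fire an event when a reader follows an internal link; the CSS class and event names below are invented for illustration:

```javascript
// Record a Google Analytics event each time a reader follows an internal
// link from a glossary entry to a deeper article.
var links = document.querySelectorAll('.related-article-link'); // hypothetical class
Array.prototype.forEach.call(links, function (link) {
    link.addEventListener('click', function () {
        ga('send', 'event', 'Glossary', 'related-click', link.href);
    });
});
```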


Building an architecture for innovation

Architectural techniques for improving search engine rankings

Devin Bost

March 8, 2014

One of the great challenges for website developers is providing more content in less time. As languages, frameworks, and platforms continually evolve and become more advanced, it can be easy for a developer to feel somewhat lost among the myriad options available. Questions a developer might face are, “How should I spend my time? What should I be learning now? And how will I know if I’m moving in the right direction?” Faced with many technological challenges, it can seem overwhelming to sacrifice the time required to add content to the website. When I say “content,” I am referring to articles, FAQs, and other information that will be attractive to search engines and to visitors of the website. Time focused on adding content may take away from time that could have been spent improving the infrastructure. Even after massive time spent writing articles, it is sometimes humbling to see your website only barely begin to obtain modest positions in various searches across the internet. Then, when the web developer realizes they need to spend time on infrastructural changes, such as preventing spam from filling up their contact forms, it can be very disappointing to watch search engine rankings rapidly drop. So, you may ask, “What techniques are available to obtain sustainable growth in the number of visitors who find my website?” To this question, there is one ultimate answer: architecture.

I will address this topic by starting with a parable. Consider two architects, not software architects, but the kind who construct buildings. The first architect rushes through the design, obtains capital through loans, and quickly leverages available resources to start constructing the beams and walls of the building. The second architect spends much more time in the early design stages. The second architect ensures that every pipe, every room, every door, and every last detail is considered. The second architect performs a thorough evaluation of the climate, the soil, and the risks of major disaster. The second architect doesn’t begin building until absolutely certain that the building has a firm foundation, a foundation that will not fall. Now let’s jump ahead in time. The first architect has constructed nearly half of the building, but he now realizes that there is a problem with the layout of some of the plumbing. To continue with construction, the top half of the building will need to be redesigned. The amount of money required to make these design changes will be massive, and without starting over, certain structural consequences of the redesign will leave the building vulnerable to natural disaster. This building has been built on a sandy foundation. The second architect, however, was much more careful. Every decision was made with the utmost analysis and planning. As a consequence, the foundation was constructed with the future in mind. The construction was able to proceed with much greater organization and cost savings. Once the construction gained momentum, milestones began occurring ahead of schedule. The second architect’s building was built on a sure foundation.

Software architecture has many similarities to the architecture involved in commercial buildings. With a good software framework, the developer may accomplish much. I have seen many developers write web applications in PHP only to later realize that it would be very complicated to build software applications that interoperate seamlessly with their website code. Nonetheless, the key to obtaining sustainable rankings in major search engines is not the content you deliver, but your ability to let your users create content. Think about some of the largest and most successful websites on the internet. How many of them became successful without providing an interactive service? The best software architecture is designed with usability in mind. Is your website simply acting like a billboard? Or does your website provide a service that your users need? You must consider what your users’ motivations will be. Many websites want to tell their users what to do and what to believe. Far fewer are the websites that listen to their users and act upon what their users say. Take some time and think about the ten most successful websites or web technologies you have ever seen. How much of their content is generated by their users?

This architecture is the key to obtaining sustainable search engine rankings. Without good architecture, your site is built upon a sandy foundation.


Search Engine Optimization techniques

Devin Bost

February 23, 2014

One may ask, “How do we measure the results of our search engine marketing and optimization?” Measuring these results requires Google Analytics. In this article, we will assume that Google Analytics has already been configured. Based on the data Google Analytics provides, the process for improving site metrics is as follows:

  1. First, we set up filters. We use filters to isolate traffic in the following areas (in order of importance, from highest to lowest):
    1. Organic search results;
    2. Paid search results (this only applies when using pay per click advertising with Google AdWords);
    3. Unique new visitors from non-search engine sources.
  2. Second, we set our metrics (external variables). We should set up different metrics for each advertising campaign. We should also set up different metrics for organic search traffic. There are several benchmarks (variables) I like to collect data for:
    1. Number of unique visitors, or unique visitor count;
    2. Unique page view count, partitioned by ;
    3. Hit count, partitioned by landing page URL (filtered to display only pages generating one or more unique visits);
    4. Hit count, partitioned first by keyword phrase (the search term used to land on a page); then, partitioned by landing page URL (the URL the search brought them to);
    5. Relative position of ranked pages on Google, weighted according to their position (using an exponential decay model I developed; a generic sketch of this kind of weighting appears after this list);
    6. Return visit count, partitioned by IP address;
    7. Bounce rates:
      1. Partitioned by keyword phrase, then landing page URL, then by number of internal links (aka layer count) clicked on;
      2. Partitioned by landing page URL, then keyword phrase;
    8. Visitor count, partitioned by backlink URL. These are visitors who landed on our site by following a link from someone else’s website; according to Brin and Page (1998), backlinks have been important since the creation of Google’s search algorithm.
  3. Third, we set our internal variables. These are what we generate internally. This technique becomes invaluable once our external variables begin exhibiting acceleration; then, we may use mathematical techniques to gain insight into how our changes to page content (internal variables) affect our external variables. It is very important that changes to site content are tracked. It can become very hard to assess rankings when it is unclear which version of a particular page was responsible for obtaining a top ranking. For this reason, it is very important that site content is kept under revision control. HTML tags must be analyzed and tracked. Here are descriptions of how these are used:
    1. Title tag: It defines the page title and communicates to the search engines what the page is about. The target keyword must be included in this tag. It is displayed to Google search users, so it is important that we apply some practical psychology here;
    2. Description meta tag: It provides a summary description of the web page and its contents. This description also appears (in most cases) in Google search results, just below the title; the target keyword must be included in this tag;
    3. URLs: An optimized URL is one that is self-explanatory and self-documenting; the target keyword should be included in the URL itself;
    4. Heading tags: These tags emphasize important text and give the page’s keywords a hierarchical structure. Heading tags also inform the search engines how the page content is organized. The <h1> tag marks the most important keywords and <h6> the least important;
    5. Keyword placement: This data will be more relevant when we start clustering keywords for strategic optimization on keyword stems. There are several techniques which may be used on this data, depending on how we implement clustering. We can use neural networks and natural language processing for this later on. Using language processing techniques is very easy when content is stored in a database that offers out-of-the-box text processing features;
    6. Content keyword density: According to Parikh and Deshmukh (2013), search algorithms place great emphasis on keyword density. It is important that targeted keywords have greater density in the relevant content;
    7. Use of robots.txt: The robots.txt file gives directions to search engines regarding which pages or directories should be crawled. Having this file configured correctly helps ensure that all optimized pages get indexed;
    8. Images: Use the image alt attribute to provide an accurate description of the image; the target keyword should be used in the description, if possible. The alt attribute of the <img> tag specifies the alternate text describing what the image contains, in case the image doesn’t load or renders incorrectly. It is also used by screen readers for people with disabilities;
    9. Use of the “rel=nofollow” attribute: In an HTML anchor tag <a>, the rel attribute defines the relationship between the current page and the page being linked to. The nofollow value signals web crawlers not to follow the link. In other words, it tells Google that your site is not passing its reputation to the linked page or pages;
    10. Sitemaps: Keeping the sitemap updated is key for good site rankings. Search engines depend upon sitemaps to tell them what the current web pages are for the website;
    11. Time interval: This is the frequency at which we take measurements. Monthly is fine initially. Once we have enough data to observe our rates of change, we can switch to a weekly interval;
  4. We will track internal links and external links once we have traffic that doesn’t bounce. We will discuss this more later. External linking is considered off-site SEO. Important factors, although rather difficult to track, are:
    1. Keyword in the backlink: Google’s ranking algorithm places high value on the text that appears within the link. The text within the link gets associated with the page and describes the page it links to. For this reason, it’s important to have the target keyword within the text of the backlink;
    2. Gradual link-building: It’s important to build backlinks in a gradual manner. The link-building process should be natural and steady. It is for this reason that SEO takes a lot of work and patience to implement. Furthermore, it’s an intentional part of Google’s strategy for gradual reputation building that it not happen quickly or overnight. In fact, if a site were to acquire dozens or hundreds of backlinks overnight, Google would almost certainly consider this a red flag (spam) and would most likely penalize the site. But if the site content is compelling, people will find it through search (or through other means) and link to it. When this occurs, the site owner has no control over the number of backlinks the site generates, and Google can generally detect that these backlinks weren’t manufactured.
    3. Writing articles to establish domain authority: Writing articles and getting them published on other reputable sites is a strategy that can help your site get backlinks. Getting an article published on trusted sites such as About.com, Wikipedia.org, or NewYorkTimes.com, with a backlink in return, will help increase your website’s reputation and achieve higher rankings;
    4. Personal networking to establish a reputation: It is recommended that we make efforts to reach out to those in the site’s community, particularly sites “that cover topic areas similar to [ours]. Opening up communication with these sites is usually beneficial.” (www.google.com/webmasters/docs/search-engine-optimization-starter-guide.pdf) Contacting sites related to your site’s subject matter is a great way to network, promote your site, and increase its exposure;
    5. Finding your website’s natural affinity group: Find websites that are related to yours or cover similar topics, for potential networking opportunities. Backlinks from off-topic sites do not count as much as links from sites whose content is related to yours.
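
As an aside on the position-weighted metric mentioned above (item 2.5): the author’s actual decay model is not given here, but a generic exponential-decay weighting looks something like the sketch below, where the decay constant is an arbitrary placeholder:

```javascript
// Generic exponential-decay weighting for search result positions:
// position 1 carries full weight, and each later position counts less.
// The decay constant (0.5) is an arbitrary placeholder, not the author's
// actual model.
function positionWeight(position, decay) {
    decay = decay || 0.5;
    return Math.exp(-decay * (position - 1));
}

// Aggregate score for a set of ranked pages; e.g. positions [1, 4, 9]
// give 1 + e^-1.5 + e^-4, or roughly 1.24.
function weightedRankScore(positions) {
    return positions.reduce(function (sum, p) {
        return sum + positionWeight(p);
    }, 0);
}
```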

References

Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1), 107-117.

Parikh, A., & Deshmukh, S. (2013, November). Search Engine Optimization. International Journal of Engineering Research and Technology, 2(11), 3146-3153.