Tag Archives: SEO

Rank building strategy – a foundation for scalability

This is our last article in the series on SEO and web marketing techniques. In this article, we will discuss a strategy for ensuring that you can obtain sustainable growth in the backlinks that visitors create for your site. “The Google PageRank algorithm looks at the pattern of links to your site as they build over time” (http://www.searchengineguide.com/stone-reuning/a-brief-intro-to-link-building-for-small.php). Unfortunately, it is often impossible for a single web developer to update the website with new articles quickly enough to compete with large websites. A review of successful websites reveals that a good strategy requires some ingenuity. One may ask, “How would a small team be able to create new content faster than a competitor that has a very large budget and staff? Is it even possible for a small group of developers to produce content quickly enough to generate the links required to obtain top search engine rankings and get noticed in this global marketplace?” The answer is actually very simple. The answer is “No, it is not possible for a group of developers to produce content that quickly.” The worried web developer then might reply, “So is all hope lost?” Luckily, the answer to that is also, “No.” But in order to be successful, we must think beyond our own capacities, and we must consider possibilities that we are unable to provide entirely by ourselves. Consider the amazing popularity of successful websites such as Twitter and Facebook. One may ask, “Did the web developers of those sites write all of the content?” In fact, what is unique about those websites is that they merely provided a platform that enabled members of the public to create site content. If we consider the simplicity and rather limited functionality of the original Twitter service, the Twitter developers didn’t even really need to do a whole lot of work. Once they built the core engine, data model, and security infrastructure, they were practically done creating content. At that point, visitors began creating the content, and the project spread like wildfire.

Here’s the vision that Jack Dorsey had when imagining Twitter:

“On May 31st, 2000, I signed up with a new service called LiveJournal. I was user 4,136 which entitled me a permanent account and street cred in some alternate geeky universe which I have not yet visited. I was living in the Sunshine Biscuit Factory in Oakland California and starting a company to dispatch couriers, taxis, and emergency services from the web. One night in July of that year I had an idea to make a more “live” LiveJournal. Real-time, up-to-date, from the road. Akin to updating your AIM status from wherever you are, and sharing it. For the next 5 years, I thought about this concept and tried to silently introduce it into my various projects. It slipped into my dispatch work. It slipped into my networks of medical devices. It slipped into an idea for a frictionless service market. It was everywhere I looked: a wonderful abstraction which was easy to implement and understand.”


By providing a mechanism that allows members to create content, the role of the developer suddenly changes drastically. The developer no longer needs to focus on trying to sell the product. Instead, the developer can focus on adding features that enhance the user experience and encourage visitors to continue creating content. As members begin to create content, the developer now needs to focus on removing content that violates the terms of use (e.g. malicious content, spam, adult content, etc.) rather than trying to create content in the hope that it will somehow get linked to. And if the platform is truly innovative, your website might even capture attention from major media companies! Those are very valuable backlinks.

Infrastructure is really the key emphasis here. If you build infrastructure that gives site members the power to create content for your website, then you deliver value both for your members and for yourself. You empower them with tools that help them while simultaneously allowing them to create content for your site. By providing specialized functionality, your site becomes a valuable resource. Valuable sites tend to attract attention, and sites that attract attention tend to grow very quickly when members are allowed to create content.

A quick note about link brokers:

I highly advise AGAINST using link brokers. Many of these firms are among the most malicious developers on the planet, and they will use the most malicious tactics to try to get your links into other people’s websites. This is a good way to end up on Google’s blacklist. NEVER BUY LINKS OR DO BUSINESS WITH ANYONE THAT OFFERS TO SELL YOU LINKS. THIS IS AN EXTREMELY DANGEROUS PRACTICE. I have known people who destroyed their hopes and dreams for their websites by using these types of services. Once you end up on Google’s blacklist, it is often impossible to remove yourself from it. Your web domains can be blacklisted, and you can be completely banned from using any and all of Google’s services. So why would you even consider doing this? How else do you think people make money from writing viruses and malware? Do you think they all just try to steal credit card numbers? Most people know that banks detect credit card fraud very effectively, so criminals instead try to make money through seemingly legitimate businesses, such as selling links. When it comes to “SEO,” you need to be very careful to only do business with people you know you can trust. A simple search for “black hat SEO” will provide you with further information about this subject.


Search engine optimization (SEO) strategies with Umbraco 7.1.0

FAQ and Glossary page SEO strategies with Umbraco, Angular, and dynamic site content.

SEO issues with Angular

In this article, I will discuss implementing search engine optimization (SEO) strategies with Umbraco 7.1.0. I recently learned from http://www.uwestfest.com/ that AngularJS introduces new SEO challenges. In Angular, data bindings are wrapped in double curly braces ({{…}}). Google does not execute JavaScript when indexing website content, so it parses the double curly braces ({{ … }}) literally, without interpreting what is inside them. This means that Angular is great for content that is not intended for indexing, but for content you want people to discover via search engines, you need some strategy. One technique is to perform server-side rendering by running the application in a headless browser (e.g. PhantomJS). For best performance, we must pre-load the content into HTML files through a Node.js backend. We can then direct the search engine to use the output file instead of our actual page. Yes, it is somewhat of a hack, but until we have something like Rendr that works with Angular, we may not have much choice.

Sending Google snapshots of html that has been processed by your JavaScript

Once a sitemap.xml or robots.txt file is created, you can use grunt-html-snapshots to generate snapshots of the pages.

“How do we grab the results and send them to Google whenever the search engine requests access, though?”

To do this, we must use some special URL routing by following conventions: https://developers.google.com/webmasters/ajax-crawling/.

Here is more information on how this is done: http://www.yearofmoo.com/2012/11/angularjs-and-seo.html.


How to make sure your dynamic content is indexed

This snapshotting technique can also be applied to dynamic content, such as content rendered from a database. For best results, it is important to design the infrastructure so that the content displayed depends on the URL provided. This strategy works best with RESTful content (referring to the RESTful architectural principle) and is very compatible with web design patterns such as Model-View-ViewModel (MVVM) or Model-View-Controller (MVC).
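As a minimal sketch of that URL-keyed design (the routes and records below are entirely made up), the idea is that what gets rendered is a pure function of the requested URL, which is what makes a snapshot of that URL stable and indexable:

```javascript
// Sketch: URL-keyed content lookup, so the rendered markup depends only
// on the requested URL. The route table and records are hypothetical.
var content = {
  '/glossary/pagerank': { title: 'PageRank', body: 'PageRank is a link analysis algorithm...' },
  '/faq/what-is-seo':   { title: 'What is SEO?', body: 'SEO is the practice of...' }
};

// Return the markup for a URL, or null so the caller can respond 404.
function render(urlPath) {
  var record = content[urlPath];
  if (!record) {
    return null;
  }
  return '<h1>' + record.title + '</h1><p>' + record.body + '</p>';
}
```

Because the same URL always produces the same markup, a crawler's snapshot and a visitor's page stay in agreement.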

Future for Umbraco

Umbraco, particularly Umbraco 7.1+, is an amazing platform, and it is changing the rules of the game. In the future, I will be posting more information on how to sync static nodes in Umbraco with a SQL Server database. In the past, I have written on the techniques required to obtain very good SEO results. While these techniques are very useful in theory, developers are often looking for more concrete implementations to better understand abstract concepts.

Grounding SEO theory with concrete strategies

To that end, I offer two suggestions:

  1. FAQ pages
  2. Glossary

These two techniques are easy to implement but very hard to master. I will speak about both of these briefly.

FAQ Pages

FAQ pages are similar to glossary pages. These pages are designed to capture a cluster of keyword phrases that originate from a base keyword phrase.

Process of developing FAQ pages

Several years ago, a consultant from Webtrends suggested that I follow this pattern:

  1. Start with a set of keyword phrases that you want to target.
  2. Choose one or two keywords that you want to form the base of your targeting.
  3. Choose 30-50 keyword phrases that contain your base keyword.
  4. Construct questions from these keyword phrases. Ensure that the keywords are contained in the questions.
  5. Create a web page for each phrase. Place the keywords in these HTML tags:
  • Meta keywords tag
  • Meta description
  • Title
  • H1 tag
  • body text (paragraph tag)
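The tag placement in step 5 can be sketched as a simple template. The helper function below is purely illustrative (it is not part of any framework, and a CMS like Umbraco would normally generate these tags for you):

```javascript
// Sketch: generate the SEO-relevant tags for one FAQ page from a base
// keyword phrase, a question built around it, and an answer.
// The function name and output format are illustrative assumptions.
function faqPage(keywordPhrase, question, answerHtml) {
  return [
    '<head>',
    '  <title>' + question + '</title>',                              // Title tag
    '  <meta name="keywords" content="' + keywordPhrase + '">',       // Meta keywords
    '  <meta name="description" content="' + question + '">',         // Meta description
    '</head>',
    '<body>',
    '  <h1>' + question + '</h1>',                                    // H1 tag
    '  <p>' + answerHtml + '</p>',                                    // Body text
    '</body>'
  ].join('\n');
}

// Example: one page targeting a phrase that contains the base keyword.
var page = faqPage(
  'head lice treatment',
  'What is the safest head lice treatment for children?',
  'The safest treatments avoid harsh pesticides...'
);
```

Note how the keyword phrase appears inside a full question in the title and H1 rather than standing alone, which ties in with the variation advice below.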


How to prevent the site from appearing like spam to search engines

Ensure that you apply some variation in how you use your keywords. Otherwise, your site content may appear like spam to Google. For example, in your H1 tag, use the keywords in a sentence. Be sure to answer the question to the best of your ability.

Proof of SEO concept

With this strategy, I was able to land quite a few search pages in the top 10 Google results for this website:


If you check out this website, notice that the base keyword “lice” is located in the domain name and the keyword “faq” is located in the sub-domain. Having “faq” as a sub-domain (typically configured via a CNAME record) signals to search engines that the URL points to FAQ pages.


Glossary pages

Glossary pages attract a very specific type of user, so it is very important to consider how the person will be using your site. Glossary pages are best when you are trying to provide a resource for people who will frequently refer back to your site. These pages tend to have high bounce rates, but if done correctly, they also tend to bring individuals back to your site repeatedly. The goal here is not necessarily to obtain an instant conversion. Instead, your goal is to provide a valuable informational resource on a particular subject. As people land on your site, they quickly obtain the desired information and usually leave. However, with some strategy, you can still convert these visitors into people who explore your site in greater depth.

Strategy for converting visitors from glossary pages

To design glossary pages effectively, however, it is critical to also provide internal links to additional articles for interested readers. This allows casual readers to get the information they need and leave, while providing additional resources for more engaged readers. This technique also allows you to track conversions as the visitors who click on a particular link. The best way to create glossary pages is to create one glossary page for each keyword or keyword phrase that you are targeting. Here is one of Google’s examples of a glossary.

The keys to successful web marketing and conversion tracking

This and the next three or so articles will be on the subject of web marketing. After that, I will shift my focus back to content regarding the political injustices we are experiencing, solutions to those problems, and strategies for improving the quality of government and health care in America. After all, if we cannot reach the right individuals, then we will never be able to change anything.

This article will be about optimizing ad performance and tracking conversion rates in a Google AdWords campaign. First of all, during the early stages of campaign and site design, it is best to use keyword phrases that have low quality scores. By low “quality scores,” I am referring to this definition of quality score: Google Quality Score video. “Why is this?” you may ask. The reason is this: You do not want to use your best keyword phrases in your ad campaign until you have had the opportunity to learn from your mistakes, improve the site design, improve the keywords in your site pages, improve the quality of your ad(s), and improve the content that you will use to lead users to desired pages from your landing page.

The reason for this strategy is that the success of your ads affects the quality scores for the keywords you are targeting. If you target your best keywords when your site is not yet ready, your quality scores may drop from very high to very low. Those low quality scores will then make it much harder for you to compete for those keyword phrases because you will need to pay much more to obtain top ad positions. If you have worked very hard to design the site perfectly – yet you still obtain poor quality scores for your desired keywords – then you may need to check your assumptions and go back to the drawing board. I recommend that you check your assumptions frequently. Every time I explore what’s underneath my assumptions, I learn something new. When there is a flaw in our logic, it’s usually because there’s a flaw in one of our assumptions. And when the underlying functions are unknown, trying to figure out what they might be is when we tend to learn the most.

The same rule is true when we apply conversion tracking techniques. Conversion tracking enables us to track events that result from a lead, such as a potential customer clicking on our ad. What’s important during conversion tracking is to track the right elements. For example, if we apply the conversion tracking code to our website in the header or some other element that loads with the page, then we will increment our conversion tracker every time someone loads that page. Since AdWords already tells you how many people are clicking on your ad, this is usually an undesirable result. Instead, you must apply the tracking code to exactly the element you want to track. For example, if you want to track when people click on your “contact us” link, then you must attach the Google Analytics tracker to your “contact us” link. If you are using the newer Universal Google Analytics, you can subscribe to events by using this technique.
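As a sketch of what that looks like with the Universal Analytics (analytics.js) `ga('send', ...)` call, the handler below reports an event only when the specific link is clicked. The element id and the category/action/label values are made-up examples:

```javascript
// Sketch: report a conversion event only when the "contact us" link is
// clicked, rather than on every page load. The element id and the
// category/action/label values are illustrative assumptions.
function contactEvent() {
  return {
    hitType: 'event',
    eventCategory: 'lead',
    eventAction: 'click',
    eventLabel: 'contact-us-link'
  };
}

// Attach the tracker only in a browser where analytics.js has loaded.
if (typeof document !== 'undefined' && typeof ga === 'function') {
  document.getElementById('contact-us').addEventListener('click', function () {
    ga('send', contactEvent());
  });
}
```

The important design point is that the event fires from the element you care about, so the conversion count measures clicks on the link rather than duplicating the page-view count AdWords already gives you.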

Building an architecture for innovation

Architectural techniques for improving search engine rankings

Devin Bost

March 8, 2014

One of the great challenges for website developers is providing more content in less time. As languages, frameworks, and platforms continually evolve and become more advanced, it can be easy for a developer to feel somewhat lost among the myriad of options available. Questions that a developer might face are, “How should I spend my time? What should I be learning now? And how will I know if I’m moving in the right direction?” Faced with many technological challenges, it can sometimes seem overwhelming for a developer to sacrifice the time required to add content to the website. When I say “content,” I am referring to articles, FAQs, and other information that will be attractive to search engines and to visitors of the website. Time spent adding content may take away from time that could have been spent improving the infrastructure. Even after massive time spent writing articles, it is sometimes humbling to see your website only barely begin to obtain modest positions on various searches across the internet. And when the web developer realizes that they need to spend time on infrastructural changes, such as preventing spam from filling up their contact forms, it can be very disappointing to see search engine rankings rapidly drop. So, you may ask, “What techniques are available to obtain sustainable growth in the number of visitors that find my website?” To this question, there is one ultimate answer: architecture.

I will address this topic by starting with a parable. Consider two architects, not software architects, but the kind that construct buildings. The first architect rushes through the design, obtains capital through loans, and quickly leverages available resources to start constructing the beams and walls of the building. The second architect spends much more time in the early design stages. The second architect ensures that every pipe, every room, every door, and every last possible detail is considered. The second architect performs a thorough evaluation of the climate, the soil, and the risks of major disaster. The second architect doesn’t begin building until absolutely certain that the building has a firm foundation, a foundation that will not fall. Now let’s jump ahead in time. The first architect has constructed nearly half of the building, but he now realizes that there is a problem with the layout of some of the plumbing. To continue with construction, the top half of the building will need to be redesigned. The amount of money required to perform these design changes will be massive, and without starting over, certain structural consequences of the redesign will leave the building vulnerable to natural disaster. This building has been built on a sandy foundation. The second architect, however, was much more careful. Every decision was made with the utmost analysis and planning. As a consequence, the foundation was constructed with the future in mind. The construction was able to occur with much greater organization and cost savings. Once the construction gained momentum, milestones began occurring ahead of schedule. The second architect’s building was built on a sure foundation.

Software architecture has many similarities to the architecture involved in commercial buildings. With a good software framework, the developer may accomplish much. I have seen many developers write web applications in PHP only to later realize that it would be very complicated to build software applications that interoperate seamlessly with their website code. Nonetheless, the key to obtaining sustainable rankings in major search engines depends not upon the content you deliver, but instead upon your ability to allow your users to create content. Think about some of the largest and most successful websites on the internet. How many of them became successful without providing an interactive service? The best software architecture is designed with usability in mind. Is your website simply acting like a billboard? Or does your website provide a service that your users need to have? You must consider what the motivations of your users will be. Many websites want to tell their users what to do and what to believe. But far fewer are the websites that listen to their users and act upon what their users say. Take some time and think about the ten most successful websites or web technologies that you have ever seen. How much of their content is generated by their users?

This architecture is the key to obtaining sustainable search engine rankings. Without good architecture, your site is built upon a sandy foundation.


Search Engine Optimization techniques

Devin Bost


One may ask, “How do we measure the results of our search engine marketing and optimization?” Measuring this data requires Google Analytics, and in this article we will assume that Google Analytics has already been configured. Based on the data Google Analytics provides, the process for improving site metrics is as follows:

  1. First, we set up filters. We use filters to isolate traffic in the following areas (in order of importance from highest to lowest):
    1. Organic search results;
    2. Paid search results (this only applies when using pay per click advertising with Google AdWords);
    3. Unique new visitors from non-search engine sources.
  2. Second, we set our metrics (external variables). We should set up different metrics for each advertising campaign. We should also set up different metrics for organic search traffic. There are several benchmarks (variables) I like to collect data for:
    1. Number of unique visitors, or unique visitor count;
    2. Unique page view count, partitioned by ;
    3. Hit count, partitioned by landing page URL (filtered to display only pages generating one or more unique visits);
    4. Hit count, partitioned first by keyword phrase (the search term used to land on a page); then, partitioned by landing page URL (the URL the search brought them to);
    5. Relative position of ranked pages on Google, weighted according to their position (with an exponential decay model I developed);
    6. Return visit count, partitioned by IP address;
    7. Bounce rates:
      1. Partitioned by keyword phrase, then landing page URL, then by number of internal links (aka layer count) clicked on;
      2. Partitioned by landing page URL, then keyword phrase;
    8. Visitor count, partitioned by backlink URL. These are visitors that landed on our site by following a link from someone else’s website, and according to (Brin & Page, 1998), backlinks have been important since the creation of Google’s search algorithm.
  3. Third, we set our internal variables. These are what we generate internally. This technique becomes invaluable once our external variables begin exhibiting acceleration; then, we may use mathematical techniques to gain insight into how our changes to page content (internal variables) affect our external variables. It is very important that changes to site content are tracked. It can become very hard to assess rankings when it is unclear which version of a particular page was responsible for obtaining a top ranking. For this reason, it is very important that revision control is maintained across the site. HTML tags must be analyzed and tracked. Here are descriptions of how these are used:
    1. Title tag: It defines the page title and communicates to the search engines what the page is about. The target keyword must be included in this tag. It is displayed to Google search users, so it is important that we apply some practical psychology here;
    2. Description meta tag: It provides a summary description of the web page and its contents. Also, this description appears (in most cases) in Google search results, just below the title; target keyword must be included in this tag;
    3. URLs: An optimized URL is self-explanatory and self-documenting; the target keyword must be included in the URL;
    4. Heading tags: These tags are used to emphasize important text and a hierarchical structure of keywords on the web page. Heading tags also inform the search engines how the page content is organized. The <h1> tag defines the most important keywords and <h6> defines the least important keywords;
    5. Keyword placement: This data will be more relevant when we start clustering keywords for strategic optimization on keyword stems. There are several techniques which may be used on this data, depending on how we implement clustering. We can use neural networks and natural language processing for this later on. Using language processing techniques is very easy when content is stored in a database that offers out-of-the-box text processing features;
    6. Content keyword density: According to (Parikh & Deshmukh, 2013), search algorithms place great emphasis on keyword density. It is important that targeted keywords have greater density in the content involved;
    7. Use of robots.txt: The robots.txt file gives directions to search engines regarding which pages or directories should be crawled. Having this file configured correctly helps ensure that all optimized pages get indexed;
    8. Images: Use the image alt attribute to provide an accurate description of the image being used; the target keyword should be used in the description, if possible. The alt attribute of the <img> tag specifies the alternate text describing what the image contains if the image is displayed incorrectly or doesn’t load. It is also used by screen readers for people with disabilities;
    9. Use of the “rel=nofollow” attribute: In an HTML anchor tag <a>, the rel attribute defines the relationship between the current page and the page being linked to by the anchor. The nofollow value signals web spiders not to follow the link. In other words, it tells Google that your site is not passing its reputation to the page or pages linked to;
    10. Sitemaps: Keeping the sitemap updated is key for good site rankings. Search engines depend upon sitemaps to tell them what the current web pages are for the website;
    11. Time interval. This is the frequency by which we take measurements. Monthly is fine initially. Once we have enough data to observe our rates of change, we can change our interval to weekly;
  4. We will track internal links and external links once we have traffic that doesn’t bounce. We will discuss this more later. External linking is considered off-site SEO. Important factors, although rather difficult to track, are:
    1. Keyword in the backlink: Google’s ranking algorithm places high value on the text that appears within the link. The text within the link gets associated with the page and describes the page it links to. For this reason, it’s important to have the target keyword within the text of the backlink;
    2. Gradual link-building: It’s important to build backlinks in a gradual manner. The link-building process should be natural and steady. It is for this reason that SEO takes a lot of work and patience to implement. Furthermore, it’s an intentional part of Google’s strategy for gradual reputation building that it not be quick or overnight. In fact, if a site were to acquire dozens or hundreds of backlinks overnight, Google would almost certainly consider this a red flag (spam) that will most likely get your site penalized. But if the site content is compelling, people can find it through search (or through other means) and link to it. When this occurs, the site owner has no control over the number of backlinks that the site will generate, and Google can detect that these backlinks occurred naturally.
    3. Writing articles to establish domain authority: Writing articles and getting them published on other reputable sites is a strategy that can help your site get backlinks. Getting an article published on trusted sites such as About.com, Wikipedia.org, or NewYorkTimes.com, with a backlink in return, will help increase your website’s reputation and achieve higher rankings;
    4. Personal networking to establish a reputation: It is recommended that we make efforts to reach out to those in the site’s community, particularly sites, “that cover topic areas similar to [ours]. Opening up communication with these sites is usually beneficial.” (www.google.com/webmasters/docs/search-engine-optimization-starter-guide.pdf) Contacting sites that are related to what your site is about is a great way to network, promote and increase your site’s exposure;
    5. Finding your website’s natural affinity group: Find websites that are related to or cover similar topics as yours for potential networking opportunities. Backlinks from off-topic sites do not count as much as links from sites whose content is related to yours.
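Putting several of the on-page items from step 3 together, a page fragment might look like the following. Everything here is a placeholder for illustration: the keyword, the file names, and the outbound URL are made up.

```html
<!-- Illustrative only: keyword, paths, and URLs are placeholders. -->
<head>
  <title>Head Lice Treatment: A Parent's Guide</title>
  <meta name="description" content="How to choose a safe head lice treatment for your children.">
</head>
<body>
  <h1>Choosing a Head Lice Treatment</h1>
  <!-- alt text describes the image and works in the target keyword: -->
  <img src="/images/comb.jpg" alt="Fine-toothed comb used for head lice treatment">
  <!-- a paid or untrusted link that should not pass reputation: -->
  <a href="http://example.com" rel="nofollow">example resource</a>
</body>
```

Each element carries the target keyword naturally in context rather than as a bare repetition, in keeping with the anti-spam advice earlier in this series.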


Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1), 107–117.

Parikh, A., & Deshmukh, S. (2013, November). Search engine optimization. International Journal of Engineering Research and Technology, 2(11), 3146–3153.