Tuesday, April 8, 2008 by Mistlee


Why Starting A Relationship With The Search Engines Can Be Hard

Posted: 08 Apr 2008 10:34 AM CDT

In a singles bar, patrons typically have the same sorts of insecurities running through their heads:

“I’ve been hurt before.” “Should I trust her?” “Can I let my walls down and let him in?”

These insecurities really get in the way of finding a happy, lasting relationship.

Search Engines have a lot in common with singles. They’ve been hurt, and they aren’t so quick to trust any more.

What You Need To Know About The Search Engines’ Pasts

Search engines have had a difficult relationship history. They’ve tried to be open-hearted and trusting. But unethical website owners, spammers, and other cheaters have used their trusting ways against them and worked their way into the top of the rankings with lies.

And search engines get hurt by that. Their job, above all else, is to help the people using them to search for information. When spammers rather than valid websites get to the top, the search engines can’t do their jobs well. That frustrates searchers, and the search engines don’t help them find what they need.

So how can you get the search engines to trust your website enough to list it? The answer is to know what they are looking for when you start a relationship with them.

Don’t just give them another line (of code)

The search engines no longer use the META Keyword tag when ranking your site. This tag is hidden at the top of your website’s HTML. It used to be one of the major areas the search engines looked at when determining what your site was all about.

The intent of this tag was for site owners to list words and phrases explaining what their sites were about. But it was rarely used in ethical ways. People would list terms they thought would help their sites come up in popular searches: “free,” “money,” and “sex” were all popular back in the early days of the Internet.

After a while, this practice was so widespread that the search engines decided to do something about it. They stopped looking at this tag at all.

These days, the most common use of the META Keyword tag is your competitors researching your optimization strategy. If you already use a META Keyword tag, consider removing it. If you’re coding or optimizing your website, leave the tag off and concentrate on putting your keywords where they belong: in the body text of your website.
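If you want to check whether your pages still carry this tag, you can scan your own HTML for it. Here is a minimal sketch using only the Python standard library; the sample page below is hypothetical:

```python
# A small scanner that collects the content of any <meta name="keywords">
# tags in a page. The sample HTML is a made-up illustration.
from html.parser import HTMLParser

class MetaKeywordFinder(HTMLParser):
    """Collects the content of every <meta name="keywords"> tag seen."""
    def __init__(self):
        super().__init__()
        self.keyword_tags = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "keywords":
                self.keyword_tags.append(attrs.get("content", ""))

def find_meta_keywords(html):
    finder = MetaKeywordFinder()
    finder.feed(html)
    return finder.keyword_tags

sample = '<html><head><meta name="keywords" content="free, money"></head></html>'
print(find_meta_keywords(sample))  # ['free, money']
```

If the function returns anything for your pages, that’s the tag to delete.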

They want you to put all your cards on the table.

A while ago, some website owners decided they wanted to have very little copy on their sites. They still wanted to come up strong on the search engines. The search engines were no longer really looking at the META tags. The problem was how to have a site that looked as though it had very little text but still had enough information for the search engines to give it a good ranking.

Some of these website owners came up with what they thought was a brilliant solution. They’d add more text to the site at the bottom of the page, below the copyright. And they’d make this text the same color as the background, which basically meant that it was invisible to human website visitors. But the search engines would still see it in the code, read it, and count it toward their ranking.

The search engines caught on to this little trick pretty quickly. If they catch you at it, they’ll blacklist your site, which means that they’ll kick it out of their results listings. And it will be really difficult to get back in! So make sure your site isn’t using any invisible text and that all the text on the site is visible to both humans and the search engines.
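A quick way to audit your own pages is to compare each element’s inline text color against the page background. Real crawlers examine stylesheets far more thoroughly; this is just a simplified sketch, and the page below is hypothetical:

```python
# A rough check for potentially invisible text: find inline-styled elements
# whose "color" value matches the page's background color. The regex only
# handles simple inline styles; it is an illustration, not a full CSS parser.
import re

def flag_invisible_text(html, background_color):
    """Return the text of inline-styled elements colored like the background."""
    pattern = re.compile(r'style="[^"]*color:\s*([^;"]+)[^"]*">([^<]*)<', re.I)
    flagged = []
    for color, text in pattern.findall(html):
        if color.strip().lower() == background_color.lower():
            flagged.append(text.strip())
    return flagged

page = '<body><p style="color: #ffffff">buy cheap widgets widgets</p></body>'
print(flag_invisible_text(page, "#ffffff"))  # ['buy cheap widgets widgets']
```

Anything this flags on a white page is text your visitors can’t see — exactly what the search engines penalize.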

They hate doors and mirrors.

Have you ever searched for something online and clicked on what you thought was the perfect match, only to come to what was little more than a full-page advertisement or a bunch of links? Disappointing, isn’t it? And then there’s a link at the bottom of the page pointing to the site you thought you were originally going to? By this point, you already have a sour taste in your mouth.

Or you’re looking for a piece of information and you click a couple of links that look like exactly the same site with slightly different words. You get a sense of deja vu and start to feel a bit like you’re losing your mind, right?

These tactics are called doorway pages and mirror sites. The search engines feel exactly the same way that you do about them because, again, the search engines want searchers to be happy with the results they get from their queries, not disgruntled or confused.

When you start your relationship with the search engines, make sure that your site optimization strategy focuses on ethical ways of getting their attention like being content-rich, having links with other high-quality sites (not link farms!), and updating your site on a regular basis. Those tactics will ensure that you start a love affair with the search engines instead of just heading for a bad breakup.

Erin Ferree is a brand identity designer who creates big visibility for small businesses. As the owner of elf design, Erin is passionate about helping her clients stand out in front of their competition and attract more clients. One of the best ways to do that is with Search Engine Optimization, which you can learn about in her eLearning product, Raise Your Ranking, which is available at

Avoid Duplicate Content Penalties

Posted: 08 Apr 2008 10:30 AM CDT

Large search engines attempt to filter their search results by removing any results that duplicate the content of other search results. Such filtering is referred to as a “duplicate content penalty.”

It is important to understand and identify what “duplicate content” actually is. Duplicate content is generally defined as substantive blocks of text that are copied from one site to another. Some webmasters try to use duplicated content in an attempt to manipulate and influence search engine rankings. The search community still occasionally debates the legitimacy and existence of duplicate content filters, but whether they exist today, or will exist tomorrow, is really irrelevant. Most webmasters have simply accepted the fact that the duplicate content penalty is currently enforced by at least some of the major search engines.

With that in mind, how does a search engine determine which version of the content is the original, and which is duplicated? It is difficult for the search engine to tell which website is responsible for the original version of any content, and some innocent websites might find themselves penalized or banned for including duplicated content. Judging by the behavior of the search engines, it is safe to assume that they will often retain the content listing from what they consider to be the most ‘trusted’ source. They may look at the number of incoming related links, the age of the domain, or any other SEO factors that reinforce the reputation of the domain that contains the duplicated content. If one of the ‘copies’ is considered by the search engine to be from a reputable source, it may rank well, while the actual source of the ‘original’ version may find itself unjustly banned or penalized.
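The engines’ actual duplicate filters are proprietary, but a classic technique for spotting near-duplicate text is “shingling”: slide a window of a few consecutive words over each document and compare the resulting sets with Jaccard similarity. This sketch is purely illustrative, not what any engine actually ships:

```python
# Shingle two texts into sets of 3-word phrases, then score their overlap.
# Identical pages score 1.0; pages with no shared phrases score 0.0.

def shingles(text, w=3):
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog"
copy = "the quick brown fox jumps over the lazy dog"
rewrite = "a slow green turtle crawls under the busy road"

print(jaccard(shingles(original), shingles(copy)))     # 1.0
print(jaccard(shingles(original), shingles(rewrite)))  # 0.0
```

A filter built this way flags pages whose score is near 1.0 — which is why merely shuffling a few words in copied content rarely escapes detection.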

Representatives from the major search engines have all made it clear that they prefer search results that contain unique content. Webmasters who want to avoid any current or future bans will do well to follow these simple guidelines in order to avoid duplicate content penalties:

1. Redirects

If you redesign your website, use permanent 301 redirects. Redirects are a legitimate way of routing web traffic.
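If your site runs on Apache, a 301 redirect can be set per-URL in an `.htaccess` file; the paths below are hypothetical examples:

```apache
# Hypothetical .htaccess rules: each old URL from the previous site design
# gets a permanent (301) redirect to its new home.
Redirect 301 /old-about.html /about/
Redirect 301 /old-services.html /services/
```

The "301" tells crawlers the move is permanent, so the old page’s standing transfers to the new URL instead of the two competing as duplicates.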

2. Unique

Each page within a website should be unique. Even if a page’s focus is similar to the theme of another page, it must contain unique and original content.

3. Multi-Language

If there are multiple language versions of a website, consider using a different domain for different versions; search engines do not view an article translated into a variety of foreign languages as being duplicated content — each language version is unique content in the eyes of the search engine.

4. Unique Meta Tags

Each web page should contain unique meta tags.
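In practice that means writing the title and description for each page individually rather than pasting one set site-wide. A hypothetical example of a page’s `<head>`:

```html
<!-- Hypothetical <head> for one page: the title and description are
     written for this page alone, not copied across the site. -->
<head>
  <title>Handmade Oak Bookshelves | Example Furniture Co.</title>
  <meta name="description"
        content="Custom oak bookshelves built to order, with free delivery.">
</head>
```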

5. Robots.txt

If you do have intentional duplicate content on your website, be sure to have a “robots.txt” file for your site to prevent the search engines from indexing the areas with duplicated content (or any areas of the website that you wish to remain private, for that matter).
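For example, a hypothetical robots.txt placed at the site root might look like this, assuming the printer-friendly pages duplicate the main articles:

```
# Tell all crawlers to skip the duplicated and private areas.
User-agent: *
Disallow: /print/
Disallow: /private/
```

Well-behaved crawlers read this file before indexing, so the duplicated versions never enter the results in the first place.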

6. Affiliate Twist

If you are promoting products or services using an affiliate program, use unique and distinctive product descriptions and web copy. If you simply use the same descriptions provided by the product owner or service provider, it’s very likely that your copy could be viewed as duplicated content.

7. Copyright

Include a copyright notice on your website.

8. Enforce

If you discover that another website is scraping your unique web content and replicating it, enforce your copyright! Use CopyScape at , or use their “copy sentry” service to receive notification of any infractions. If you discover a copyright violation, contact the website and politely request appropriate changes.

If the changes are not made in a reasonable and satisfactory amount of time, contact the ISP (web host) of the infringing site, and file a DMCA complaint with Google.

9. Avoid Identical Content

Do everything you can to avoid serving a web page that contains content identical or closely related to another page. If for some reason you have two pages that contain identical content, use a robots.txt to block the search engines from spidering one version of the page.

Other Tools:

Duplicate Page Checker -

While it may still be debatable whether all the major search engines currently employ a duplicate content penalty, all have made it abundantly clear that they do not have any desire to provide search results that rehash the same content over and over. Actively avoid any potential penalties by taking a proactive approach to building unique content.

Sharon Housley manages marketing for FeedForAll, software for creating, editing, and publishing RSS feeds and podcasts. In addition, Sharon manages marketing for RecordForAll, audio recording and editing software.

Some Great Ways To Get Website Traffic for Free

Posted: 08 Apr 2008 10:25 AM CDT

Regardless of how good your website is, how good your sales copy is, or how attractive your product is, unless you generate traffic you will never make money online. This article is going to cover some ways for you to drive traffic to your website for free.

Now, if you are like many others, you simply don’t have the spare cash to pay someone to drive traffic to your website for you. So, you must be willing to learn and put in some effort to understand what you need to do to get traffic for free. Be prepared, because it will take work and you will need to commit to it at the beginning. Once you have mastered the techniques, you can start to ease off, as many of them will begin to work on auto-pilot for you.

Being able to get website traffic for free will take a little time to build up. Without question, paying for traffic will get you instant results, but it is very important you learn how to get traffic for free. Believe me, it is a great feeling when you get tons of traffic to your site, and you know you haven’t paid a penny for it!

The first place you should look to get traffic for free is forums. Whatever you are selling or your website is offering, I guarantee you there is a forum on the subject somewhere. Search out and find those forums using the search engines. Check them out and see what people are writing about or commenting on. Sign up to those forums for free and start answering questions, at first without blatantly advertising your website. Within a short space of time, you will begin to get a reputation in the forum as an expert. Once you are recognized as an expert, you can then start adding your signature to all your comments.